About the Journal
Journal Summary
The International Journal of Artificial Intelligence and Cybersecurity (IJAIC) is a peer-reviewed, open-access journal dedicated to publishing cutting-edge research and reviews at the intersection of artificial intelligence (AI) and cybersecurity. IJAIC serves as a global hub for fostering collaboration among researchers, practitioners, and policymakers while addressing the challenges of securing the digital landscape with AI-driven innovations.
Key Features of IJAIC:
- Open-access model to promote global knowledge sharing and accessibility.
- A focus on multidisciplinary research combining theory, practice, and ethics in AI and cybersecurity.
- High standards of peer review, ensuring quality and rigor in every publication.
- Strong emphasis on ethical AI practices and solutions to emerging cybersecurity threats.
Current Issue
Bias and fairness in AI models are critical issues in the integration of artificial intelligence (AI) in healthcare. The concern is that AI systems may perpetuate or exacerbate existing biases and disparities, leading to unfair treatment or outcomes for certain groups of patients. This can occur due to several factors:
Data Bias: AI models are often trained on datasets that may not adequately represent diverse patient populations, such as racial and ethnic minorities. This can result in models that perform less accurately for these groups, leading to misdiagnosis or unequal access to treatment.
Algorithmic Bias: Biases can be embedded in the algorithms themselves, affecting how data is processed and decisions are made. For example, some algorithms use healthcare cost as a proxy for illness, which can under-identify health needs in populations that spend less on care despite being equally sick.
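The cost-as-proxy problem above can be made concrete with a small sketch. The data, thresholds, and group names below are invented for illustration only: two groups are equally ill, but one incurs lower costs per illness (for example, due to limited access to care), so a cost-based "high need" label systematically misses its ill patients.

```python
# Hypothetical illustration of cost as a proxy for illness.
# All patients, costs, and thresholds here are invented.

def flag_high_need(cost, threshold=1000):
    """Label a patient 'high need' when annual cost exceeds a threshold."""
    return cost > threshold

# (is_ill, annual_cost) pairs; both groups have the same illness rate,
# but ill patients in group B incur lower costs.
group_a = [(True, 1500), (True, 1800), (False, 200), (False, 300)]
group_b = [(True, 700), (True, 900), (False, 150), (False, 250)]

def recall(patients):
    """Fraction of truly ill patients that the cost rule flags as high need."""
    ill_costs = [cost for is_ill, cost in patients if is_ill]
    return sum(flag_high_need(c) for c in ill_costs) / len(ill_costs)

print(recall(group_a))  # 1.0 -> every ill patient in group A is flagged
print(recall(group_b))  # 0.0 -> no ill patient in group B is flagged
```

Even though both groups are equally sick, the proxy label allocates all attention to group A, which is the mechanism the paragraph above describes.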
Healthcare Disparities: AI can perpetuate existing healthcare disparities by disproportionately benefiting well-resourced populations while excluding those with limited access to healthcare services.
Ethical and Legal Concerns: Bias in AI models raises ethical and legal questions about accountability, transparency, and patient consent. Ensuring fairness requires diverse and representative data, algorithm audits, and collaboration among stakeholders.
Strategies to address these biases include using diverse datasets, promoting transparency in AI decision-making, and engaging patients and communities in AI development. By addressing bias and ensuring fairness, AI can be used more effectively to improve healthcare outcomes for all patients.
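One of the strategies mentioned above, algorithm audits, can be sketched minimally as a comparison of a model's positive-prediction rates across demographic groups. The predictions and group labels below are invented for illustration; a large gap in rates (sometimes called a demographic parity gap) signals a potential disparity worth investigating, not a verdict on its own.

```python
# Minimal algorithm-audit sketch on hypothetical model outputs.
# A prediction of 1 means the model recommends an intervention.

def positive_rate(predictions):
    """Fraction of patients for whom the model recommends intervention."""
    return sum(predictions) / len(predictions)

preds_group_a = [1, 1, 0, 1, 0, 1]  # invented outputs for group A
preds_group_b = [0, 0, 1, 0, 0, 0]  # invented outputs for group B

gap = abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))
print(f"demographic parity gap: {gap:.2f}")  # prints: demographic parity gap: 0.50
```

In practice an audit would also examine error rates (false negatives per group), calibration, and the representativeness of the underlying data, since equalizing prediction rates alone does not guarantee fair outcomes.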