See how this article has been cited at scite.ai
scite shows how a scientific paper has been cited by providing the context of the citation, a classification describing whether it supports, mentions, or contrasts the cited claim, and a label indicating in which section the citation was made.
AdadiA., & BerradaM. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6. https://doi.org/10.1109/ACCESS.2018.2870052
AdnanM., UddinM. I., KhanE., AlharithiF. S., AminS., & AlzahraniA.A. (2022). Earliest Possible Global and Local Interpretation of Students’ Performance in Virtual Learning Environment by Leveraging Explainable AI. IEEE Access, 10(December), 129843–129864. https://doi.org/10.1109/ACCESS.2022.3227072
AlbahriA. S., DuhaimA. M., FadhelM. A., AlnoorA., BaqerN. S., AlzubaidiL., … DeveciM. (2023). A Systematic Review of Trustworthy and Explainable Artificial Intelligence in Healthcare: Assessment of Quality, Bias Risk, and Data Fusion. Information Fusion, 96(January), 156–191. https://doi.org/10.1016/j.inffus.2023.03.008
AlonsoJ. M., & CasalinoG. (2019). Explainable Artificial Intelligence for Human-Centric Data Analysis in Virtual Learning Environments. Communications in Computer and Information Science, 1091(September), 125–138. https://doi.org/10.1007/978-3-030-31284-8_10
AlotaibiA., & SasC. (2023). Review of AI-Based Mental Health Apps. in Proceedings of British HCI Conference 2023, 13 pages, DOI: 10.14236/ewic/BCSHCI2023.27
AlperinK. B., WollaberA. B., & GomezS.R. (2020). Improving Interpretability for Cyber Vulnerability Assessment Using Focus and Context Visualizations. 2020 IEEE Symposium on Visualization for Cyber Security, VizSec 2020, 30–39. https://doi.org/10.1109/VizSec51108.2020.00011
AngelovP. P., SoaresE. A., JiangR., ArnoldN. I., & AtkinsonP.M. (2021). Explainable artificial intelligence: an analytical review. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 11(5). https://doi.org/10.1002/widm.1424
AnjomshoaeS., CalvaresiD., NajjarA., & FrämlingK. (2019). Explainable agents and robots: Results from a systematic literature review. Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, 2 (Aamas), 1078–1088. https://doi.org/10.5555/3306127.3331806
AntoniadiA. M., DuY., GuendouzY., WeiL., MazoC., BeckerB. A., & MooneyC. (2021). Current challenges and future opportunities for xai in machine learning-based clinical decision support systems: A systematic review. Applied Sciences (Switzerland), 11(11), 1–23. https://doi.org/10.3390/app11115088
BachT. A., KhanA., HallockH., BeltrãoG., & SousaS. (2024). A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective. International Journal of Human-Computer Interaction, 40(5), 1251–1266. https://doi.org/10.1080/10447318.2022.2138826
BanieckiH., ParzychD., & BiecekP. (2023). The grammar of interactive explanatory model analysis. Data Mining and Knowledge Discovery, (January). https://doi.org/10.1007/s10618-023-00924-w
BarredoA., Díaz-RodríguezN., DelJ., BennetotA., TabikS., BarbadoA., … HerreraF. (2020). Explainable Artificial Intelligence ( XAI ): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58 (October 2019), 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
BertiniF., Dal Palù, A., FabianoF., & IottiE. (2022). CARING for xAI. CEUR Workshop Proceedings, 3204, 47–60.
BistarelliS., MancinelliA., SantiniF., & TaticchiC. (2022). Arg-XAI: a Tool for Explaining Machine Learning Results. Proceedings -International Conference on Tools with Artificial Intelligence, ICTAI, 2022-Octob, 205–212. https://doi.org/10.1109/ICTAI56018.2022.00037
BrdnikS. (2023). GUI Design Patterns for Improving the HCI in Explainable Artificial Intelligence. International Conference on Intelligent User Interfaces, Proceedings IUI, 240–242. https://doi.org/10.1145/3581754.3584114
CartaS., ConsoliS., PoddaA. S., Reforgiato RecuperoD., & StanciuM.M. (2022). An eXplainable Artificial Intelligence tool for statistical arbitrage. Software Impacts, 14(June), 100354. https://doi.org/10.1016/j.simpa.2022.100354
ChalabianlooN., CanY. S., UmairM., SasC., & ErsoyC. (2022). Application level performance evaluation of wearable devices for stress classification with explainable AI. Pervasive and Mobile Computing, 87, 101703. https://doi.org/10.1016/j.pmcj.2022.101703
ChattopadhayA., SarkarA., HowladerP., & BalasubramanianV.N. (2018). Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. Proceedings -2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018, 2018-January, 839–847. https://doi.org/10.1109/WACV.2018.00097
ChromikM., & ButzA. (2021). Human-XAI Interaction: A Review and Design Principles for Explanation User Interfaces. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12933 LNCS, 619–640. https://doi.org/10.1007/978-3-030-85616-8_36
ChromikM., & SchuesslerM. (2020). A taxonomy for human subject evaluation of black-box explanations in XAI. CEUR Workshop Proceedings, 2582.
ClementT., KemmerzellN., AbdelaalM., & AmbergM. (2023). XAIR: A Systematic Metareview of Explainable AI (XAI) Aligned to the Software Development Process. Machine Learning and Knowledge Extraction, 5(1), 78–108. https://doi.org/10.3390/make5010006
DieberJ., & KirraneS. (2020). Why model why? Assessing the strengths and limitations of LIME. ArXiv, abs/2012.00093.
DindorfC., KonradiJ., WolfC., TaetzB., BleserG., HuthwelkerJ., … FröhlichM. (2021). Classification and automated interpretation of spinal posture data using a pathologyindependent classifier and explainable artificial intelligence (Xai). Sensors, 21(18), 1–16. https://doi.org/10.3390/s21186323
DosovitskiyA., BeyerL., KolesnikovA., WeissenbornD., ZhaiX., UnterthinerT., … HoulsbyN. (2020). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.
EhsanU., LiaoQ. V., MullerM., RiedlM. O., & WeiszJ.D. (2021). Expanding explainability: Towards social transparency in ai systems. Conference on Human Factors in Computing Systems -Proceedings. https://doi.org/10.1145/3411764.3445188
EhsanU., WintersbergerP., LiaoQ. V., WatkinsE. A., MangerC., Daumé, H., … RiedlM.O. (2022). Human-Centered Explainable AI (HCXAI): Beyond Opening the Black-Box of AI. Conference on Human Factors in Computing Systems -Proceedings. https://doi.org/10.1145/3491101.3503727
EhsanU., WintersbergerP., WatkinsE. A., RamosG., WeiszJ. D., RienerA., & RiedlM.O. (2023). Human-Centered Explainable AI (HCXAI): Coming of Age. https://doi.org/10.1145/3544549.3573832
FeredayJ., & Muir-CochraneE. (2006). Demonstrating Rigor Using Thematic Analysis: A Hybrid Approach of Inductive and Deductive Coding and Theme Development. International Journal of Qualitative Methods, 5(1), 80–92. https://doi.org/10.1177/160940690600500107
FouladgarN., AlirezaieM., & FramlingK. (2022). Metrics and Evaluations of Time Series Explanations: An Application in Affect Computing. IEEE Access, 10, 23995–24009. https://doi.org/10.1109/ACCESS.2022.3155115
GalliA., PiscitelliM. S., MoscatoV., & CapozzoliA. (2022). Bridging the gap between complexity and interpretability of a data analytics-based process for benchmarking energy performance of buildings. Expert Systems with Applications, 206(June 2021), 117649. https://doi.org/10.1016/j.eswa.2022.117649
GandolfiM., GalazzoI. B., PavanR. G., CrucianiF., ValeN., PicelliA., … MenegazG. (2023). eXplainable AI Allows Predicting Upper Limb Rehabilitation Outcomes in Sub-Acute Stroke Patients. IEEE Journal of Biomedical and Health Informatics, 27(1), 263–273. https://doi.org/10.1109/JBHI.2022.3220179
GulmezogluB. (2022). XAI-Based Microarchitectural Side-Channel Analysis for Website Fingerprinting Attacks and Defenses. IEEE Transactions on Dependable and Secure Computing, 19(6), 4039–4051. https://doi.org/10.1109/TDSC.2021.3117145
HeberleH., ZhaoL., SchmidtS., WolfT., & HeinrichJ. (2023). XSMILES: interactive visualization for molecules, SMILES and XAI attribution scores. Journal of Cheminformatics, 15(1), 1–12. https://doi.org/10.1186/s13321-022-00673-w
HeimerlA., BaurT., LingenfelserF., WagnerJ., & AndreE. (2019). NOVA -A tool for eXplainable Cooperative Machine Learning. 2019 8th International Conference on Affective Computing and Intelligent Interaction, ACII 2019, (Xcml). https://doi.org/10.1109/ACII.2019.8925519
HenriksenE., HaldenU., KuzluM., & CaliU. (2022). Electrical Load Forecasting Utilizing an Explainable Artificial Intelligence (XAI) Tool on Norwegian Residential Buildings. SEST 2022 -5th International Conference on Smart Energy Systems and Technologies, 1–6. https://doi.org/10.1109/SEST53650.2022.9898500
HoffmanR. R., MuellerS. T., KleinG., & LitmanJ. (2023). Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance. Frontiers in Computer Science, 5. https://doi.org/10.3389/fcomp.2023.1096257
HolzingerA., CarringtonA., & MüllerH. (2020). Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations. KI -Kunstliche Intelligenz, 34(2), 193–198. https://doi.org/10.1007/s13218-020-00636-z
IslamM. S., HussainI., RahmanM. M., ParkS. J., & HossainM.A. (2022). Explainable Artificial Intelligence Model for Stroke Prediction Using EEG Signal. Sensors, 22(24). https://doi.org/10.3390/s22249859
JayakumarK., & SkandhakumarN. (2022). A Visually Interpretable Forensic Deepfake Detection Tool Using Anchors. 7th International Conference on Information Technology Research: Digital Resilience and Reinvention, ICITR 2022 -Proceedings. https://doi.org/10.1109/ICITR57877.2022.9993294
KadirM. A., Mohamed SelimA., BarzM., & SonntagD. (2023). A User Interface for Explaining Machine Learning Model Explanations. International Conference on Intelligent User Interfaces, Proceedings IUI, 59–63. https://doi.org/10.1145/3581754.3584131
KapciaM., EshkikiH., DuellJ., FanX., ZhouS., & MoraB. (2021). ExMed: An AI Tool for Experimenting Explainable AI Techniques on Medical Data Analytics. Proceedings -International Conference on Tools with Artificial Intelligence, ICTAI, 2021-Novem, 841–845. https://doi.org/10.1109/ICTAI52525.2021.00134
KelekoA. T., Kamsu-FoguemB., NgounaR. H., & TongneA. (2023). Health condition monitoring of a complex hydraulic system using Deep Neural Network and DeepSHAP explainable XAI. Advances in Engineering Software, 175(January 2022), 103339. https://doi.org/10.1016/j.advengsoft.2022.103339
KimJ. K., BaeM. N., LeeK., KimJ. C., & HongS.G. (2022). Explainable Artificial Intelligence and Wearable Sensor-Based Gait Analysis to Identify Patients with Osteopenia and Sarcopenia in Daily Life. Biosensors, 12(3). https://doi.org/10.3390/bios12030167
KonradiJ., ZajberM., BetzU., DreesP., GerkenA., & MeineH. (2022). AI-Based Detection of Aspiration for Video-Endoscopy with Visual Aids in Meaningful Frames to Interpret the Model Outcome. Sensors, 22(23). https://doi.org/10.3390/s22239468
KumarA., ManikandanR., KoseU., GuptaD., & SatapathyS.C. (2021). Doctor’s dilemma: Evaluating an explainable subtractive spatial lightweight convolutional neural network for brain tumor diagnosis. ACM Transactions on Multimedia Computing, Communications and Applications, 17(3s). https://doi.org/10.1145/3457187
KuzluM., CaliU., SharmaV., & Güler, Ö. (2020). Gaining insight into solar photovoltaic power generation forecasting utilizing explainable artificial intelligence tools. IEEE Access, 8, 187814–187823. https://doi.org/10.1109/ACCESS.2020.3031477
LaatoS., TiainenM., Najmul IslamA.K.M., & MäntymäkiM. (2021). How to explain AI systems to end users: a systematic literature review and research agenda. Internet Research, 32(7), 1–31. https://doi.org/10.1108/INTR-08-2021-0600
LangleyP., MeadowsB., SridharanM., & ChoiD. (2017). Explainable Agency for Intelligent Autonomous Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 31(2), 4762–4763. https://doi.org/10.1609/aaai.v31i2.19108
LeungC. K., PazdorA. G. M., & SouzaJ. (2021). Explainable Artificial Intelligence for Data Science on Customer Churn. 2021 IEEE 8th International Conference on Data Science and Advanced Analytics, DSAA 2021. https://doi.org/10.1109/DSAA53316.2021.9564166
LiangY., LiS., YanC., LiM., & JiangC. (2021). Explaining the black-box model: A survey of local interpretation methods for deep neural networks. Neurocomputing, 419, 168–182. https://doi.org/10.1016/j.neucom.2020.08.011
LongoL., GoebelR., LecueF., KiesebergP., & HolzingerA. (2020). Explainable Artificial Intelligence: Concepts, Applications, Research Challenges and Visions. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12279 LNCS, 1–16. https://doi.org/10.1007/978-3-030-57321-8_1
LopesP., SilvaE., BragaC., OliveiraT., & RosadoL. (2022). XAI Systems Evaluation: A Review of Human and Computer-Centred Methods. Applied Sciences (Switzerland), 12(19). https://doi.org/10.3390/app12199423
LoveP. E. D., FangW., MatthewsJ., PorterS., LuoH., & DingL. (2022). Explainable Artificial Intelligence: Precepts, Methods, and Opportunities for Research in Construction. ArXiv Preprint ArXiv:2211.06579, 1–58.
LucieriA., BajwaM. N., BraunS. A., MalikM. I., DengelA., & AhmedS. (2022). ExAID: A multimodal explanation framework for computeraided diagnosis of skin lesions. Computer Methods and Programs in Biomedicine, 215. https://doi.org/10.1016/j.cmpb.2022.106620
LundbergS. M., & LeeS.I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 2017-Decem(Section 2), 4766–4775.
MalandriL., MercorioF., MezzanzanicaM., & NobaniN. (2022). ConvXAI: a System for Multimodal Interaction with Any Black-box Explainer. In Cognitive Computation. Springer US. https://doi.org/10.1007/s12559-022-10067-7
Mandeep, AgarwalA., BhatiaA., MalhiA., KalerP., & PannuH.S. (2022). Machine Learning Based Explainable Financial Forecasting. 2022 4th International Conference on Computer Communication and the Internet, ICCCI 2022, 34–38. https://doi.org/10.1109/ICCCI55554.2022.9850272
MercorioF., MezzanzanicaM., & SevesoA. (2020). eXDiL: A Tool for Classifying and eXplaining Hospital Discharge Letters. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12279 LNCS(DiL), 159–172. https://doi.org/10.1007/978-3-030-57321-8_9
MoherD., LiberatiA., TetzlaffJ., & AltmanD.G. (2010). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. International Journal of Surgery, 8(5), 336–341. https://doi.org/10.1016/j.ijsu.2010.02.007
Moreno-SanchezP.A. (2020). Development of an Explainable Prediction Model of Heart Failure Survival by Using Ensemble Trees. Proceedings -2020 IEEE International Conference on Big Data, Big Data 2020, 4902–4910. https://doi.org/10.1109/BigData50022.2020.9378460
MoscatoV., PicarielloA., & Sperlí, G. (2021). A benchmark of machine learning approaches for credit score prediction. Expert Systems with Applications, 165(May 2020), 113986. https://doi.org/10.1016/j.eswa.2020.113986
MuchaH., RobertS., BreitschwerdtR., & FellmannM. (2021). Interfaces for Explanations in Human-AI Interaction: Proposing a Design Evaluation Approach. Conference on Human Factors in Computing Systems -Proceedings. https://doi.org/10.1145/3411763.3451759
MuhammadA. P., KnaussE., & BärgmanJ. (2023). Human factors in developing automated vehicles: A requirements engineering perspective. Journal of Systems and Software, 205, 111810. https://doi.org/10.1016/j.jss.2023.111810
MuhammadK., UllahA., LloretJ., SerJ. Del, & De AlbuquerqueV.H.C. (2021). Deep Learning for Safe Autonomous Driving: Current Challenges and Future Directions. IEEE Transactions on Intelligent Transportation Systems, 22(7). https://doi.org/10.1109/TITS.2020.3032227
NagyM., & MolontayR. (2023). Interpretable Dropout Prediction: Towards XAI-Based Personalized Intervention. International Journal of Artificial Intelligence in Education, (0123456789). https://doi.org/10.1007/s40593-023-00331-8
NaisehM., JiangN., MaJ., & AliR. (2020). Explainable Recommendations in Intelligent Systems: Delivery Methods, Modalities and Risks. Lecture Notes in Business Information Processing, 385 LNBIP(March), 212–228. https://doi.org/10.1007/978-3-030-50316-1_13
NaisehM., SimkuteA., ZieniB., JiangN., & AliR. (2024). C-XAI: A Conceptual Framework for Designing XAI tools that Support Trust Calibration. Journal of Responsible Technology, 100076. https://doi.org/10.1016/j.jrt.2024.100076
NautaM., TrienesJ., PathakS., NguyenE., PetersM., SchmittY., … SeifertC. (2023). From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI. ACM Computing Surveys. https://doi.org/10.1145/3583558
NazarM., AlamM. M., YafiE., & Su’UdM.M. (2021). A Systematic Review of Human-Computer Interaction and Explainable Artificial Intelligence in Healthcare with Artificial Intelligence Techniques. IEEE Access, 9, 153316–153348. https://doi.org/10.1109/ACCESS.2021.3127881
OduorE., QianK., LiY., & PopaL. (2020). XAIT: An interactivewebsite for explainable ai for text. International Conference on Intelligent User Interfaces, Proceedings IUI, 120–121. https://doi.org/10.1145/3379336.3381468
OliveiraE., BragaC., SampaioA., OliveiraT., SoaresF., & RosadoL. (2023). Designing XAIbased Computer-aided Diagnostic Systems: Operationalising User Research Methods. CEUR Workshop Proceedings, 3359, 25–36.
PaniguttiC., BerettaA., FaddaD., GiannottiF., PedreschiD., PerottiA., & RinzivilloS. (2023). Co-design of human-centered, explainable AI for clinical decision support. ACM Transactions on Interactive Intelligent Systems. https://doi.org/10.1145/3587271
PoliJ.-P., OuerdaneW., & PierrardR. (2021). Generation of Textual Explanations in XAI: the Case of Semantic Annotation. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 1–6. IEEE. https://doi.org/10.1109/FUZZ45933.2021.9494589
QianK., DanilevskyM., KatsisY., KawasB., OduorE., PopaL., & LiY. (2021). XNLP: A living survey for XAI research in natural language processing. International Conference on Intelligent User Interfaces, Proceedings IUI, 78–80. https://doi.org/10.1145/3397482.3450728
RasG., van GervenM., & HaselagerP. (2018). Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges. https://doi.org/10.1007/978-3-319-98131-4_2
RoyI., FengB., RoychowdhuryS., RaviS. K., UmretiyaR. V., ReynoldsC., … HoffmanA. (2023). Understanding oxidation of Fe-Cr-Al alloys through explainable artificial intelligence. MRS Communications, 13(1), 82–88. https://doi.org/10.1557/s43579-022-00315-0
SaeedW., & OmlinC. (2021). Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowledge-Based Systems, 263(Dl). https://doi.org/10.1016/j.knosys.2023.110273
SanchesP., JansonA., KarpashevichP., NadalC., QuC., Daudén RoquetC., … SasC. (2019). HCI and Affective Health. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–17. New York, NY, USA: ACM. https://doi.org/10.1145/3290605.3300475
Šarčević, A., PintarD., Vranić, M., & KrajnaA. (2022). Cybersecurity Knowledge Extraction Using XAI. Applied Sciences (Switzerland), 12(17). https://doi.org/10.3390/app12178669
SarpS., CatakF. O., KuzluM., CaliU., KusetogullariH., ZhaoY., … GulerO. (2023). An XAI approach for COVID-19 detection using transfer learning with X-ray images. Heliyon, 9(4), e15137. https://doi.org/10.1016/j.heliyon.2023.e15137
SarpS., KuzluM., CaliU., ElmaO., & GulerO. (2021). An Interpretable Solar Photovoltaic Power Generation Forecasting Approach Using An Explainable Artificial Intelligence Tool. 2021 IEEE Power & Energy Society Innovative Smart Grid Technologies Conference (ISGT), (ii), 1–5. IEEE. https://doi.org/10.1109/ISGT49243.2021.9372263
SarpS., KuzluM., WilsonE., CaliU., & GulerO. (2021). The enlightening role of explainable artificial intelligence in chronic wound classification. Electronics (Switzerland), 10(12). https://doi.org/10.3390/electronics10121406
SchoonderwoerdT. A. J., JorritsmaW., NeerincxM. A., & van den BoschK. (2021). Humancentered XAI: Developing design patterns for explanations of clinical decision support systems. International Journal of Human Computer Studies, 154, 102684. https://doi.org/10.1016/j.ijhcs.2021.102684
SchwalbeG., & FinzelB. (2023). A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts. Data Mining and Knowledge Discovery. https://doi.org/10.1007/s10618-022-00867-8
SelvarajuR. R., CogswellM., DasA., VedantamR., ParikhD., & BatraD. (2017). Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. International Journal of Computer Vision, 128(2), 618–626. https://doi.org/10.1109/ICCV.2017.74
ShreeS., ChandrasekaranJ., LeiY., KackerR. N., & KuhnD.R. (2022). DeltaExplainer: A Software Debugging Approach to Generating Counterfactual Explanations. Proceedings -4th IEEE International Conference on Artificial Intelligence Testing, AITest 2022, 103–110. https://doi.org/10.1109/AITest55621.2022.00023
SinghA., SenguptaS., & LakshminarayananV. (2020). Explainable Deep Learning Models in Medical Image Analysis. 1–19. https://doi.org/10.3390/jimaging6060052 SkuppinN., HoffmannE. J., ShiY., & ZhuX.X. (2022). EXPLAINABILITY ANALYSIS OF CNN IN DETECTION OF VOLCANIC DEFORMATION SIGNAL. 5844–5847.
SmithS., PatwaryM., NorickB., LeGresleyP., RajbhandariS., CasperJ., … CatanzaroB. (2022). Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model.
SpeithT. (2022). A Review of Taxonomies of Explainable Artificial Intelligence ( XAI ) Methods. In 2022 ACM Conference on Fairness, Accountability, AndTransparency (FAccT’22). https://doi.org/https://doi.org/10.1145/3531146.35346391
SuhB., YuH., KimH., LeeS., KongS., KimJ. W., & ChoiJ. (2023). Interpretable Deep-Learning Approaches for Osteoporosis Risk Screening and Individualized Feature Analysis Using Large Population-Based Data: Model Development and Performance Evaluation. Journal of Medical Internet Research, 25. https://doi.org/10.2196/40179
ThiemeA., BelgraveD., & DohertyG. (2020). Machine Learning in Mental Health. ACM Transactions on Computer-Human Interaction, 27(5), 1–53. https://doi.org/10.1145/3398069
TjoaE., & GuanC. (2021). A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793–4813. https://doi.org/10.1109/TNNLS.2020.3027314
VieiraC. P., & DigiampietriL.A. (2022). Machine Learning post-hoc interpretability: a systematic mapping study. ACM International Conference Proceeding Series, Par F18047. https://doi.org/10.1145/3535511.3535512
VillarongaE. F., KiesebergP., & LiT. (2018). Humans forget, machines remember: Artificial intelligence and the Right to Be Forgotten. Computer Law and Security Review, 34(2), 304–313. https://doi.org/10.1016/j.clsr.2017.08.007
ViloneG., & LongoL. (2021). Notions of explainability and evaluation approaches for explainable artificial intelligence. Information Fusion, 76(April), 89–106. https://doi.org/10.1016/j.inffus.2021.05.009
WangC., & AnP. (2021). A Mobile Tool that Helps Nonexperts Make Sense of Pretrained CNN by Interacting with Their Daily Surroundings. Extended Abstracts of MobileHCI 2021 -ACM International Conference on Mobile Human-Computer Interaction: Mobile Apart, Mobile Together. https://doi.org/10.1145/3447527.3474873
WangD., YangQ., AbdulA., & LimB.Y. (2019). Designing theory-driven user-centric explainable AI. Conference on Human Factors in Computing Systems -Proceedings, (February). https://doi.org/10.1145/3290605.3300831
WangQ., HuangK., ChandakP., ZitnikM., & GehlenborgN. (2023). Extending the Nested Model for User-Centric XAI: A Design Study on GNN-based Drug Repurposing. IEEE Transactions on Visualization and Computer Graphics, 29(1), 1266–1276. https://doi.org/10.1109/TVCG.2022.3209435
WeberP., CarlK. V., & HinzO. (2023). Applications of Explainable Artificial Intelligence in Finance—a systematic review of Finance, Information Systems, and Computer Science literature. In Management Review Quarterly. Springer International Publishing. https://doi.org/10.1007/s11301-023-00320-0
WellawatteG. P., GandhiH. A., SeshadriA., & WhiteA.D. (2022). A Perspective on Explanations of Molecular Prediction Models. Chemrxiv. https://doi.org/10.1021/acs.jctc.2c01235
XuW. (2019). Toward human-centered AI: A perspective from human-computer interaction. ACM, 26(4), 42–46. https://doi.org/10.1145/3328485
YounisseR., AhmadA., & Abu Al-HaijaQ. (2022). Explaining Intrusion Detection-Based Convolutional Neural Networks Using Shapley Additive Explanations (SHAP). Big Data and Cognitive Computing, 6(4). https://doi.org/10.3390/bdcc6040126
YükselN., Börklü, H. R., SezerH. K., & CanyurtO.E. (2023). Review of artificial intelligence applications in engineering design perspective. Engineering Applications of Artificial Intelligence, 118(April 2022), 105697. https://doi.org/10.1016/j.engappai.2022.105697
ZhouB., KhoslaA., LapedrizaA., OlivaA., & TorralbaA. (2016). Learning Deep Features for Discriminative Localization. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016-December, 2921–2929. https://doi.org/10.1109/CVPR.2016.319