**Job Description**
This PhD project focuses on developing human-centered interpretable machine learning, addressing limitations of existing explainable AI methods, in particular the need to incorporate domain expertise and to achieve higher accuracy. The research will develop theory and algorithms for hybrid model selection, leveraging the minimum description length (MDL) principle and interpretable machine learning approaches. The resulting methods will be evaluated on real-world case studies in health care to demonstrate their potential for discovering novel insights from data. The candidate will be embedded in the Explanatory Data Analysis group at the Leiden Institute of Advanced Computer Science.
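For context on the MDL principle mentioned above, the following is a minimal illustrative sketch only, not the project's actual methodology: it scores candidate models by a crude two-part code length L(M) + L(D|M) and picks the one with the shortest total description. The helper `two_part_mdl` and the polynomial toy example are assumptions introduced purely for illustration.

```python
import numpy as np

def two_part_mdl(y, y_hat, k):
    """Crude two-part MDL score (in bits) for a model with k parameters.

    L(M): roughly (k/2) * log2(n) bits to encode the parameters (BIC-style).
    L(D|M): roughly (n/2) * log2(RSS/n) bits to encode residuals under a
    Gaussian noise model (additive constants dropped; they cancel when
    comparing models on the same data).
    """
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    model_cost = 0.5 * k * np.log2(n)
    data_cost = 0.5 * n * np.log2(rss / n)
    return model_cost + data_cost

# Toy example: select a polynomial degree by minimizing total description length.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = 1.5 * x**2 - 0.5 * x + rng.normal(scale=0.1, size=x.size)

scores = {}
for degree in range(1, 8):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    scores[degree] = two_part_mdl(y, y_hat, k=degree + 1)

best = min(scores, key=scores.get)
print(f"Selected degree: {best}")  # the quadratic typically gives the shortest code
```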
**Skills & Abilities**
• Strong knowledge of and experience with machine learning, data mining, and statistics
• Knowledge of and experience with information theoretic learning (e.g., the MDL principle)
• Highly motivated to perform foundational data mining research and apply developed methods to real-world applications
• Creative ‘making things work’ mentality; independent and communicative team player
• Experience writing scientific manuscripts and excellent academic writing skills
• Excellent programming skills (preferably in Python)
• Interested in contributing to educational activities
• Excellent proficiency in English (oral and written)
**Qualifications**
Required Degree(s) in:
• Computer Science
• Statistics
• Artificial Intelligence
• Data Science
• Related field