**Job Description**
This position is for up to two Ph.D. students in trustworthy machine learning, focusing on cybersecurity, privacy, and verifiability for AI systems, funded by the Wallenberg AI, Autonomous Systems and Software Program (WASP). The role involves developing methods to ensure that AI/ML systems are verifiable, robust, secure, privacy-preserving, and ethical, enabling their reliable, large-scale use in safety-critical and security-critical domains such as healthcare, autonomous driving, AI-native networks, and financial services.
**Skills & Abilities**
• Strong programming skills in Python.
• Experience with at least one popular deep learning library (PyTorch, TensorFlow, Keras, etc.).
• Good verbal and written communication skills in English.
• Strong teamwork and collaboration skills.
• Competency in critical and independent thinking.
**Qualifications**
Required Degree(s) in:
• Computer Science
• Artificial Intelligence
• Machine Learning
• Cybersecurity
• Related subjects (with a minimum of 240 credits, at least 60 of which must be in advanced courses in Machine Learning, Deep Learning, Artificial Intelligence, or Information Security)
**Experience**
Experience Required:
• No prior research experience required.
Other:
• Experience with information security or privacy-preserving machine learning (e.g., thesis work, publications, or internships) will strengthen the application.