PhD position on attacks against large language models (LLMs)
In this PhD position, the candidate will develop novel ways of attacking large language models (LLMs); demonstrate the societal impact of those attacks; and help to develop defence mechanisms to secure LLMs against these newly developed attacks.
This project will investigate attacks on large language models (LLMs), a major recent development in artificial intelligence that has already been integrated into many aspects of public life. If these LLMs can be triggered into providing malicious output, this may have disastrous consequences, such as the generation of harmful content, the execution of malicious code on connected devices, or the abuse of limited resources. The idea is to assess the resistance of these models against new attacks, using techniques from the domains of AI and optimisation, and to develop methods to defend against such harms by leveraging cryptographic approaches. To this end, you will:
1. Investigate how to adapt existing adversarial attacks from image classification and other domains to existing open-source LLMs (e.g., Llama 3, Phi-3), as well as develop new kinds of attacks, for example based on evolutionary algorithms.
2. Investigate to what extent data poisoning attacks can influence the output of LLMs deployed in security- and safety-critical infrastructure.
3. Perform these attacks under different scenarios and model their impact.
4. Evaluate the impact of such attacks when executed in multi-agent systems, where the output of one LLM is used as input for another LLM.
5. Design new defence methods, e.g., inspired by cryptography, to prevent such attacks from affecting real-world LLM systems, focusing on methods that limit the computational overhead in order to minimise the energy, and therefore environmental, cost of such defences.
These research directions will advance the understanding of LLM security vulnerabilities and the prevention of malicious output generation.
Information and application
Are you interested in this position? Please send your application via the 'Apply now' button below before 14 February 2026, and include:
- A Curriculum Vitae, including a list of all courses attended and grades obtained, and, if applicable, a list of publications and references.
- A cover letter (maximum 2 pages A4), emphasising your specific interest, qualifications, and motivations to apply for this position.
- An IELTS test, internet TOEFL test (TOEFL-iBT), or Cambridge CAE-C (CPE) certificate. Applicants with a non-Dutch qualification who have not had secondary and tertiary education in English can only be admitted with an IELTS test showing a total band score of at least 6.5, an internet TOEFL test (TOEFL-iBT) showing a score of at least 90, or a Cambridge CAE-C (CPE).
For more information regarding this position, you are welcome to contact Luca Mariot via the following email address: l.mariot@utwente.nl.
The first round of interviews will be held on 5 March 2026.
Screening is part of the selection process.
About the department
The candidate will join the Semantics, Cybersecurity & Services (SCS) group at the University of Twente, under the supervision of Dr. Luca Mariot, Dr. ir. Thijs van Ede, Dr. Jair Santanna, and Dr. Florian Hahn.
About the organisation
The faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS) uses mathematics, electronics and computer technology to contribute to the development of Information and Communication Technology (ICT). With ICT present in almost every device and product we use nowadays, we embrace our role as contributors to a broad range of societal activities and as pioneers of tomorrow's digital society. As part of a tech university that aims to shape society, individuals and connections, our faculty works together intensively with industrial partners and researchers in the Netherlands and abroad, and conducts extensive research for external commissioning parties and funders. Our research has a high profile both in the Netherlands and internationally. It has been accommodated in three multidisciplinary UT research institutes: Mesa+ Institute, TechMed Centre and Digital Society Institute.