Raymond Evans

Raymond Evans is a security researcher having 10+ years of experience in cyber security. He has been working with companies such as Cybrary to provide students with hands-on training environments. Additionally, Raymond runs CTF events for conferences such as Nolacon, BSides STL, and various National Cyber Security Awareness month events.

His experience involves developing vulnerable environments to educate analysts on emerging CVEs and traditional skill sets. He also has experience in developing and executing tactics in environments to simulate real-world APT TTPs.

Be sure to catch Raymond’s talk at ShowMeCon!

________________________________________________________________________________________________________________

Pentesting Large Language Models: Challenges and Techniques

At the cutting edge of AI advancements, Large Language Models (LLMs) such as GPT-3 and BERT are
transforming a wide array of industries. Yet, this swift integration into various sectors has brought to light
significant security issues, which are the focus of the “Pentesting Large Language Models: Challenges and
Techniques” talk. This discussion aims to thoroughly examine the specific vulnerabilities found in LLMs
and underscore the importance of robust pentesting methods to safeguard their security and functionality.

The session will present a focused overview of Large Language Models, highlighting their architectural
design, range of applications, and their critical role in the current AI domain. It will underscore the vital aspect of security within AI, especially due to the inherent risks in LLMs, such as the potential for data
corruption, model inversion attacks, and threats to data security.

Furthermore, the talk will explore specialized pentesting techniques for LLMs, covering both automated
and manual approaches. This segment includes real-world case studies demonstrating the tangible
impacts and consequences of security breaches in LLMs, effectively linking theoretical concepts with
practical scenarios.

The presentation will address best practices in mitigating risks associated with LLMs. It will emphasize
the importance of secure development, deployment, and ongoing surveillance. The talk will also provide
insights into the future challenges of AI security and the growing need for sophisticated pentesting
strategies.