Customise Consent Preferences

We use cookies to help you navigate efficiently and perform certain functions. You will find detailed information about all cookies under each consent category below.

The cookies that are categorised as "Necessary" are stored on your browser as they are essential for enabling the basic functionalities of the site. ... 

Always Active

Necessary cookies are required to enable the basic features of this site, such as providing secure log-in or adjusting your consent preferences. These cookies do not store any personally identifiable data.

No cookies to display.

Functional cookies help perform certain functionalities like sharing the content of the website on social media platforms, collecting feedback, and other third-party features.

No cookies to display.

Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics such as the number of visitors, bounce rate, traffic source, etc.

No cookies to display.

Performance cookies are used to understand and analyse the key performance indexes of the website which helps in delivering a better user experience for the visitors.

No cookies to display.

Advertisement cookies are used to provide visitors with customised advertisements based on the pages you visited previously and to analyse the effectiveness of the ad campaigns.

No cookies to display.

UK Cybersecurity Agency Alerts to Rising Threat of Chatbot Manipulation Attacks

National Cyber Security Centre Issues Warning About Potential Risks from ‘Prompt Injection’ Attacks

London, UK – The National Cyber Security Centre (NCSC) has raised alarms over the increasing risk of cyberattacks targeting chatbots, warning that hackers could manipulate chatbot technology using “prompt injection” attacks. These attacks exploit the way chatbots interpret input prompts, potentially leading to data theft, scams, and other malicious activities.

The NCSC described prompt injection as a process where an attacker crafts specific inputs that deceive the chatbot into functioning in unintended ways. By manipulating the chatbot’s responses, cybercriminals could gain unauthorized access to sensitive information or cause harm to users.

Chatbots, powered by artificial intelligence (AI), are commonly used for handling customer queries in various sectors, including online banking, retail, and services. They are designed to replicate human-like interactions and can process large datasets to provide automated responses. However, the NCSC’s warning highlights that, while convenient, these tools are also vulnerable to exploitation.

Leave a Reply

Your email address will not be published.