AI Trust Portal

AI-UI

AI-UI is a leading AI company based in Germany, specializing in secure and customer-centric AI solutions. The expert team at AI-UI delivers customized AI solutions that optimize business processes and drive growth for clients in academia, industry, trading, and development. Clients appreciate AI-UI's reliability, flexibility, and problem-solving capabilities. Interested parties are encouraged to contact AI-UI to discover how AI can transform their business and unlock the full potential of their data.

info@ai-ui.ai

ai-ui.ai

Wahlweise is an AI solution designed to simplify understanding election programs. It provides users with easy access to political topics, allowing them to ask questions, receive clear answers, and identify which party matches their preferences. The app is free to use and ensures data security.

SplxAI Safety and Security Certificate®

Issued on: 05-07-2024

Prompt Injection

Context Leakage

Context leakage in AI chatbots refers to the unintentional exposure of sensitive internal documents, intellectual property, and the system prompt of the chatbot itself. The system prompt is crucial as it guides the chatbot’s behavior and responses. If leaked, it can provide detailed insights into the chatbot’s operational parameters and proprietary algorithms. Such leaks are critical because they can allow competitors to replicate proprietary solutions, leading to significant competitive disadvantages. Additionally, context leakage can serve as a gateway for further adversarial activities, amplifying the overall risks to the organization and compromising the security and integrity of the chatbot system.
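
A typical mitigation is an output guard that screens each response for fragments of the system prompt or other protected context before it reaches the user. The sketch below is a minimal illustration of that idea, assuming a hypothetical SYSTEM_PROMPT constant and guard_response helper; it is not a description of the audited system.

```python
# Minimal sketch of an output guard against context leakage.
# SYSTEM_PROMPT and the n-gram window size are illustrative assumptions.
SYSTEM_PROMPT = "You are an assistant that explains election programs ..."

def ngrams(text: str, n: int = 8):
    """Yield sliding word n-grams from a text, lowercased."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def guard_response(response: str, protected: str = SYSTEM_PROMPT, n: int = 8) -> str:
    """Block the response if it reproduces any n-gram of the protected context."""
    protected_grams = set(ngrams(protected, n))
    if any(gram in protected_grams for gram in ngrams(response, n)):
        return "I'm sorry, I can't share details about my internal configuration."
    return response
```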

Social Engineering

Social engineering through chatbots is a high-risk threat, as it exploits the trust and naivety of average users. Attackers can manipulate the chatbot to deceive users into divulging personal or sensitive information. This method is particularly dangerous because it leverages human psychology, making it one of the easiest yet most effective ways to harm users and compromise their security.

Jailbreak

Jailbreaking involves manipulating the chatbot to bypass its preset operational constraints, posing a high risk. This vulnerability can open the door to various malicious activities, allowing attackers to exploit the chatbot for unintended purposes. Once the chatbot is compromised, it can be used to disseminate harmful information or perform unauthorized actions, significantly jeopardizing the security and integrity of the system.

Model leakage

Model leakage is a critical security threat in AI chatbots, involving the unintended exposure of the underlying model's architecture, parameters, training data, personal user data, and proprietary company data. This can occur through sophisticated prompt injection attacks in which malicious actors manipulate the chatbot into revealing sensitive details about its construction and functioning. Model leakage can lead to significant competitive disadvantages, as adversaries might replicate or manipulate the exposed model for their own purposes, and it increases the risk of further adversarial attacks.

Off-Topic

Intentional misuse

Intentional misuse of chatbots by users is a medium-level risk involving unexpected behaviors and security threats. Such misuse can strain resources, causing denial of service for legitimate users. It also introduces security risks, as unforeseen prompts may lead to unintended and potentially harmful responses from the chatbot.
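
A common first line of defense against this kind of resource strain is per-user rate limiting. The following sketch shows a simple fixed-window limiter; the window size, request cap, and function names are illustrative assumptions rather than details of the audited system.

```python
import time
from collections import defaultdict

# Illustrative limits for a fixed-window rate limiter.
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 20

_request_log: dict[str, list[float]] = defaultdict(list)

def allow_request(user_id: str) -> bool:
    """Return True if the user is still under the per-window request cap."""
    now = time.time()
    recent = [t for t in _request_log[user_id] if now - t < WINDOW_SECONDS]
    _request_log[user_id] = recent
    if len(recent) >= MAX_REQUESTS_PER_WINDOW:
        return False
    _request_log[user_id].append(now)
    return True
```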

Bias

Bias in AI chatbots represents a high-risk issue, as it can lead to the dissemination of prejudiced or discriminatory information. When a chatbot exhibits bias, it can inadvertently reflect and amplify societal prejudices, negatively impacting the user experience. This can result in user alienation and loss of trust, as well as potential legal repercussions for discriminatory behavior. Additionally, biased responses can tarnish the brand's reputation and lead to public backlash, emphasizing the need for vigilant monitoring and mitigation strategies to ensure fair and unbiased interactions.

Harmful content

Harmful content in GenAI chatbots is a critical risk that involves the generation of abusive, offensive, or otherwise damaging material during interactions. This issue arises when the chatbot engages in discussions that deviate from its intended purpose, potentially leading to severe user distress and reputational damage for the brand. Harmful content can include hate speech, harassment, or any form of toxic language that can alienate users and create a hostile environment. Such incidents not only degrade the user experience but can also escalate into broader public relations crises and legal challenges if the chatbot's harmful behavior is widely reported.
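
Such content is typically caught by passing every candidate response through a moderation step before it is shown to the user. The sketch below illustrates only the control flow; the keyword blocklist stands in for a real moderation classifier or API and is purely hypothetical.

```python
# Illustrative moderation gate; the blocklist is a placeholder for a real classifier.
BLOCKLIST = {"slur_example", "threat_example"}  # placeholder terms

def moderate(text: str) -> bool:
    """Return True if the text is safe to show (toy keyword check)."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def safe_reply(candidate: str) -> str:
    """Replace a flagged response with a neutral refusal."""
    if moderate(candidate):
        return candidate
    return "I can't help with that request."
```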

Competition infiltration

Competition infiltration occurs when users are redirected to a competitor's services, representing a medium risk. This can result in direct revenue loss as potential customers are diverted away. The risk extends to potential leaks of competitive intelligence, where sensitive business strategies might be inadvertently exposed.

Exploiting rail aggression limit

Exploiting the chatbot's guardrail aggression limits is a high-risk issue. Overly aggressive guardrails can result in disabled features and a decrease in user engagement, leading to potential revenue loss. Conversely, guardrails that are not aggressive enough leave the chatbot vulnerable to adversarial attacks and misuse, compromising its effectiveness and security.

Off-topic discussion

Off-topic discussions can steer the conversation away from the user's original intent, leading to a medium risk of poor user experience. When the chatbot engages in irrelevant dialogue, it fails to address the user's needs effectively, resulting in frustration and decreased satisfaction with the service.

Hallucination

URL Check

This medium-level risk refers to the chatbot providing inaccurate or fabricated references, URLs, or titles. When a chatbot generates false citations or links, it can mislead users and spread misinformation. This not only degrades the user experience but can also have serious legal and reputational consequences if the misinformation leads to significant harm or public backlash.
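
One way to reduce this risk is to extract every URL from a generated answer and verify that it actually resolves before the answer is displayed. The sketch below uses the requests library for a HEAD request; the timeout value and the verify_urls helper are illustrative assumptions, not the audited implementation.

```python
import re
import requests

URL_PATTERN = re.compile(r"https?://\S+")

def verify_urls(response_text: str, timeout: float = 3.0) -> dict[str, bool]:
    """Check each URL in a response with a HEAD request; True means it resolved."""
    results = {}
    for url in URL_PATTERN.findall(response_text):
        url = url.rstrip(").,")  # strip trailing punctuation picked up by the regex
        try:
            r = requests.head(url, allow_redirects=True, timeout=timeout)
            results[url] = r.status_code < 400
        except requests.RequestException:
            results[url] = False
    return results
```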

RAG Precision

RAG (Retrieval-Augmented Generation) precision in GenAI chatbots is a high-risk issue that involves inaccuracies in the generated responses due to errors in the retrieval process. When a chatbot employs RAG, it combines information retrieval with generative capabilities to provide responses. If the retrieval mechanism fetches irrelevant or incorrect information, the generated content can be misleading or factually incorrect. This poses significant risks, especially in domains requiring high accuracy, such as healthcare or finance, leading to potential harm to users and liability for the organization.
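
A simple precision guard is to discard retrieved chunks whose similarity to the user query falls below a threshold, and to decline to answer when nothing relevant remains. The sketch below assumes pre-computed embeddings and an illustrative cutoff value; it is not the retrieval pipeline used by the audited system.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.75  # illustrative cutoff

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_chunks(query_vec: np.ndarray, chunks: list[tuple[str, np.ndarray]]) -> list[str]:
    """Keep only retrieved chunks that are sufficiently similar to the query."""
    kept = [text for text, vec in chunks if cosine(query_vec, vec) >= SIMILARITY_THRESHOLD]
    return kept  # if empty, the chatbot should say it cannot answer from its sources
```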
