Is Your AI Learning From the Wrong Crowd?

Key Takeaways

  • Artificial intelligence can be built for malicious purposes, and even legitimate AI tools can be corrupted by hackers.
  • “Data poisoning” occurs when attackers feed AI bad data to manipulate its output, potentially introducing biases or dangerous inaccuracies.
  • It’s generally safer to trust AI from major, established companies, but always verify the information provided.
  • Critically evaluate AI-generated content: check sources, cross-reference information, and ask if it “sounds right.”

Artificial intelligence, like any tool, can be used for good or ill. Some AI models are intentionally designed for harmful activities, while even well-meaning AI can be tampered with.

Hackers can corrupt AI by “poisoning” it with manipulated data. This tactic aims to alter the AI’s learning dataset, thereby changing what it produces. The goal might be subtle, like instilling biases, or more overt, like generating dangerously incorrect information or harmful suggestions.
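To make the idea concrete, here is a minimal, hypothetical sketch in Python (a toy scikit-learn classifier, not any real chatbot's training pipeline): an attacker who can silently flip even a fraction of the training labels changes what the model learns and biases its output.

```python
# Illustrative sketch of "label flipping," one simple form of data poisoning.
# The dataset, fractions, and model here are assumptions for demonstration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset standing in for the model's "learning dataset".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean, trustworthy data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned copy: the attacker flips 40% of one class's labels before training.
rng = np.random.default_rng(0)
flip = (y_train == 1) & (rng.random(len(y_train)) < 0.40)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, np.where(flip, 0, y_train))

print("clean accuracy:   ", round(clean.score(X_test, y_test), 3))
print("poisoned accuracy:", round(poisoned.score(X_test, y_test), 3))
print("clean model predicts class 1 for   ", (clean.predict(X_test) == 1).mean(), "of cases")
print("poisoned model predicts class 1 for", (poisoned.predict(X_test) == 1).mean(), "of cases")
```

The poisoned model systematically under-predicts the targeted class, which is exactly the kind of subtle bias a user would never notice from the answers alone.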

Since AI doesn’t inherently know if its use is positive or negative, users can become unwitting victims of cybercrime if they aren’t careful. Ram Shankar Siva Kumar, a Data Cowboy with Microsoft’s red team, shared insights on AI security at the recent RSAC Conference, an event drawing thousands of cybersecurity experts.

Spotting a compromised AI is quite challenging. Kumar advises that while every AI tool has vulnerabilities, you can generally place more trust in major players like OpenAI, Microsoft, and Google. These companies not only have more resources to address issues but also usually have clearer, stated intentions for their AI, according to PCWorld.

Always approach AI-generated advice or instructions with a healthy dose of skepticism. AI can sometimes “hallucinate,” meaning it presents incorrect information as fact. For instance, Google’s AI once mistakenly stated Germany was larger than California by confusing miles and kilometers. A compromised AI could hallucinate in far more treacherous ways or intentionally provide dangerous guidance, perhaps by ignoring safety protocols for medical advice.
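The Germany/California slip is easy to reproduce with back-of-the-envelope arithmetic (the area figures below are approximate): compare the two regions in mixed units and Germany looks bigger; convert to the same unit and it doesn't.

```python
# Approximate areas; the point is the unit mix-up, not the exact figures.
germany_km2 = 357_000                    # Germany, in square kilometres
california_mi2 = 164_000                 # California, in square miles
california_km2 = california_mi2 * 2.59   # 1 square mile is roughly 2.59 square kilometres

print(germany_km2 > california_mi2)  # True  -- the mixed-unit comparison makes Germany look larger
print(germany_km2 > california_km2)  # False -- in consistent units, California is larger
```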

When an AI chatbot answers your questions, it’s summarizing information it has found. However, the quality of that summary hinges on its sources, which aren’t always top-tier. It’s wise to review the source material an AI relies upon. AI can take details out of context, misinterpret them, or lack a diverse enough dataset to identify reliable sources.

Think about it like this: if a friend shares some surprising news, you might ask where they heard it to judge the source’s reliability. You should extend this same critical habit to your interactions with AI.

The core message is to keep your critical thinking active. Don’t let an AI’s confident tone sway you. Always ask yourself: does this information actually sound right?

These tips provide a starting point. Continue to protect yourself by regularly cross-referencing information—checking multiple sources to verify what your AI helper tells you—and by knowing who to consult for additional help. It’s also useful to consider the motive behind the creation of any source material.

When you’re less familiar with a topic, it’s crucial to be discerning about who and what you trust. Malicious AI succeeds when users stop thinking critically.

