ChatGPT’s Arrival on iPhone Sparks Reprise of Privacy Concerns

By John P. Mello Jr.

Since OpenAI introduced ChatGPT, privacy advocates have warned consumers about the potential threat to privacy posed by generative AI apps. The arrival of a ChatGPT app in the Apple App Store has ignited a fresh round of caution.
“[B]efore you jump headfirst into the app, beware of getting too personal with the bot and putting your privacy at risk,” warned Muskaan Saxena in Tech Radar.
The iOS app comes with an explicit tradeoff that users should be aware of, she explained, including this admonition: “Anonymized chats may be reviewed by our AI trainer to improve our systems.”
Anonymization, though, is no ticket to privacy. Anonymized chats are stripped of information that can link them to particular users. “However, anonymization may not be an adequate measure to protect consumer privacy because anonymized data can still be re-identified by combining it with other sources of information,” Joey Stanford, vice president of privacy and security at Platform.sh, a maker of a cloud-based services platform for developers based in Paris, told TechNewsWorld.
“It’s been found that it’s relatively easy to de-anonymize information, especially if location information is used,” explained Jen Caltrider, lead researcher for Mozilla’s Privacy Not Included project.
“Publicly, OpenAI says it isn’t collecting location data, but its privacy policy for ChatGPT says they could collect that data,” she told TechNewsWorld.
Nevertheless, OpenAI does warn users of the ChatGPT app that their information will be used to train its large language model. “They’re honest about that. They’re not hiding anything,” Caltrider said.
Caleb Withers, a research assistant at the Center for New American Security, a national security and defense think tank in Washington, D.C., explained that if a user types their name, place of work, and other personal information into a ChatGPT query, that data will not be anonymized.
“You have to ask yourself, ‘Is this something I would say to an OpenAI employee?’” he told TechNewsWorld.
OpenAI has stated that it takes privacy seriously and implements measures to safeguard user data, noted Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif.
“However, it’s always a good idea to review the specific privacy policies and practices of any service you use to understand how your data is handled and what protections are in place,” he told TechNewsWorld.

As dedicated to data security as an organization might be, vulnerabilities might exist that could be exploited by malicious actors, added James McQuiggan, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.
“It’s always important to be cautious and consider the necessity of sharing sensitive information to ensure that your data is as secure as possible,” he told TechNewsWorld.
“Protecting your privacy is a shared responsibility between users and the companies that collect and use their data, which is documented in those long and often unread End User License Agreements,” he added.
McQuiggan noted that users of generative AI apps have been known to insert sensitive information such as birthdays, phone numbers, and postal and email addresses into their queries. “If the AI system is not adequately secured, it can be accessed by third parties and used for malicious purposes such as identity theft or targeted advertising,” he said.
He added that generative AI applications could also inadvertently reveal sensitive information about users through their generated content. “Therefore,” he continued, “users must know the potential privacy risks of using generative AI applications and take the necessary steps to protect their personal information.”
Unlike desktops and laptops, mobile phones have some built-in security features that can curb privacy incursions by apps running on them.
However, as McQuiggan points out, “While some measures, such as application permissions and privacy settings, can provide some level of protection, they may not thoroughly safeguard your personal information from all types of privacy threats as with any application loaded on the smartphone.”
Vena agreed that built-in measures like app permissions, privacy settings, and app store regulations offer some level of protection. “But they may not be sufficient to mitigate all privacy threats,” he said. “App developers and smartphone manufacturers have different approaches to privacy, and not all apps adhere to best practices.”
Even OpenAI’s practices vary from desktop to mobile phone. “If you’re using ChatGPT on the website, you have the ability to go into the data controls and opt out of your chat being used to improve ChatGPT. That setting doesn’t exist on the iOS app,” Caltrider noted.
Caltrider also found the permissions used by OpenAI’s iOS app a bit fuzzy, noting that “In the Google Play Store, you can check and see what permissions are being used. You can’t do that through the Apple App Store.”
She warned users about depending on privacy information found in app stores. “The research that we’ve done into the Google Play Store safety information shows that it’s really unreliable,” she observed.

“Research by others into the Apple App Store shows it’s unreliable, too,” she continued. “Users shouldn’t trust the data safety information they find on app pages. They should do their own research, which is hard and tricky.”
“The companies need to be better at being honest about what they’re collecting and sharing,” she added. “OpenAI is honest about how they’re going to use the data they collect to train ChatGPT, but then they say that once they anonymize the data, they can use it in lots of ways that go beyond the standards in the privacy policy.”
Stanford noted that Apple has some policies in place that can address some of the privacy threats posed by generative AI apps.
However, he acknowledged, “These measures may not be enough to prevent generative AI apps from creating inappropriate, harmful, or misleading content that could affect users’ privacy and security.”
“OpenAI is just one company. There are several creating large language models, and many more are likely to crop up in the near future,” added Hodan Omaar, a senior AI policy analyst at the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy, in Washington, D.C.
“We need to have a federal data privacy law to ensure all companies adhere to a set of clear standards,” she told TechNewsWorld.
“With the rapid growth and expansion of artificial intelligence,” added Caltrider, “there definitely needs to be solid, strong watchdogs and regulations to keep an eye out for the rest of us as this grows and becomes more prevalent.”
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

Copyright 1998-2023 ECT News Network, Inc. All Rights Reserved.