EditorDavid

Is Everyone Using AI to Cheat Their Way Through College?

Chungin Lee used ChatGPT to help write the essay that got him into Columbia University — and then "proceeded to use generative artificial intelligence to cheat on nearly every assignment," reports New York magazine's blog Intelligencer:

As a computer-science major, he depended on AI for his introductory programming classes: "I'd just dump the prompt into ChatGPT and hand in whatever it spat out." By his rough math, AI wrote 80 percent of every essay he turned in. "At the end, I'd put on the finishing touches. I'd just insert 20 percent of my humanity, my voice, into it," Lee told me recently... When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, "It's the best place to meet your co-founder and your wife."

He eventually did meet a co-founder, and after three unpopular apps they found success by creating the "ultimate cheat tool" for remote coding interviews, according to the article. "Lee posted a video of himself on YouTube using it to cheat his way through an internship interview with Amazon. (He actually got the internship, but turned it down.)" The article ends with Lee and his co-founder raising $5.3 million from investors for one more AI-powered app, and Lee says they'll target the standardized tests used for graduate school admissions, as well as "all campus assignments, quizzes, and tests. It will enable you to cheat on pretty much everything."

Somewhere along the way Columbia put him on disciplinary probation — not for cheating in coursework, but for creating the apps. But "Lee thought it absurd that Columbia, which had a partnership with ChatGPT's parent company, OpenAI, would punish him for innovating with AI." (OpenAI has even made ChatGPT Plus free to college students during finals week, the article points out, with OpenAI saying their goal is just teaching students how to use it responsibly.) Although Columbia's policy on AI is similar to those of many other universities — students are prohibited from using it unless their professor explicitly permits them to do so, either on a class-by-class or case-by-case basis — Lee said he doesn't know a single student at the school who isn't using AI to cheat. To be clear, Lee doesn't think this is a bad thing. "I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating," he said...

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. The article points out ChatGPT's monthly visits increased steadily over the last two years — until June, when students went on summer vacation. "College is just how well I can use ChatGPT at this point," a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT....

It isn't as if cheating is new. But now, as one student put it, "the ceiling has been blown off." Who could resist a tool that makes every assignment easier with seemingly no consequences? After using ChatGPT for their final semester of high school, one student says "My grades were amazing. It changed my life." So she continued using it in college, and "Rarely did she sit in class and not see other students' laptops open to ChatGPT."
One ethics professor even says "The students kind of recognize that the system is broken and that there's not really a point in doing this." (Yes, students are even using AI to cheat in ethics classes...) It's not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students' essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.

Sea Levels Rose Faster Than Expected Last Year. Blame Global Warming - But What Happens Next?

Though global sea levels "varied little" for the 2,000 years before the 20th century, CNN reports that sea levels then "started rising and have not stopped since — and the pace is accelerating." And sea level rise "was unexpectedly high last year, according to a recent NASA analysis of satellite data."

More concerning, however, is the longer-term trend. The rate of annual sea level rise has more than doubled over the past 30 years, resulting in the global sea level increasing 4 inches since 1993. (A quick arithmetic check of those two figures appears after this story.) "It's like we're putting our foot on the gas pedal," said Benjamin Hamlington, a research scientist in the Sea Level and Ice Group at NASA's Jet Propulsion Laboratory. While other climate signals fluctuate, global sea level has a "persistent rise," he told CNN.

It spells trouble for the future. Scientists have a good idea how much average sea level will rise by 2050 — around 6 inches globally, and as much as 10 to 12 inches in the US. Past 2050, however, things get very fuzzy. "We have such a huge range of uncertainty," said Dirk Notz, head of sea ice at the University of Hamburg. "The numbers are just getting higher and higher and higher very quickly." The world could easily see an extra 3 feet of sea level rise by 2100, he told CNN; it could also take hundreds of years to reach that level. Scientists simply don't know enough yet to project what will happen.

What scientists are crystal clear about is the reason for the rise: human-caused global warming. Oceans absorb roughly 90% of the excess heat primarily produced by burning fossil fuels, and as water heats up it expands. Heat in the oceans and atmosphere is also driving melting of the Greenland and Antarctic ice sheets, which together hold enough fresh water to raise global sea levels by around 213 feet. Melting ice sheets have driven roughly two-thirds of longer-term sea level rise, although last year — the planet's hottest on record — the two factors flipped, making ocean warming the main driver. [SciTechDaily reports that between 2021 and 2023 the Antarctic ice sheet actually showed an overall increase in mass, which exerted a negative contribution to sea level rise.]

It's likely that an increase of about 3 feet is already locked in, Notz said, because "we have pushed the system too hard." The big question is, how quickly will it happen? Ice sheets are the biggest uncertainty, as it's not clear how fast they'll react as the world heats up — whether they'll melt steadily or reach a tipping point and rapidly collapse... [I]t's still unclear how processes may unfold over the next decades and centuries. Antarctica is "the elephant in the room," he said. Alarming changes are unfolding on this vast icy continent, which holds enough water to raise levels by 190 feet. Notz describes the ice sheet as an "awakening giant": It takes a long time to wake up, but once awake, "it's very, very difficult to put it back to sleep."

The article notes that U.S. coastlines "are tracking above global average and toward the upper end of climate model projections, NASA's Hamlington said." (The state of Louisiana has one of the highest rates of land loss in the world, with some places experiencing nearly 4x the global rate of relative sea level rise.) But it's not just a problem for America. "Over the next three decades, islands such as Tuvalu, Kiribati and Fiji will experience at least 6 inches of sea level rise even if the world reduces planet-heating pollution, according to NASA."
"Entire villages in Fiji have been formally relocated," said Fijian activist George Nacewa, from climate group 350.org, "the incoming tides are flooding our roads and inundating our crops." However, if the pace accelerates rapidly, "it will be very, very difficult to adapt to, because things unfold too quickly," he said. "Humans still have control over how fast sea level rises over the next decades and centuries by cutting emissions, Notz noted." Thanks to long-time Slashdot reader RoccamOccam for sharing the news.

'I Broke Up with Google Search. It was Surprisingly Easy.'

Inspired by researchers who'd bribed people to use Microsoft's Bing for two weeks (and found some wanted to keep using it), a Washington Post tech columnist also tried it — and reported it "felt like quitting coffee." "The first few days, I was jittery. I kept double searching on Google and DuckDuckGo, the non-Google web search engine I was using, to check if Google gave me better results. Sometimes it did. Mostly it didn't." "More than two weeks into a test of whether I love Google search or if it's just a habit, I've stopped double checking. I don't have Google FOMO..."

I didn't do a fancy analysis into whether my search results were better with Google or DuckDuckGo, whose technology is partly powered by Bing. The researchers found our assessment of search quality is based on vibes. And the vibes with DuckDuckGo are perfectly fine. Many dozens of readers told me about their own satisfaction with non-Google searches...

For better or worse, DuckDuckGo is becoming a bit more Google-like. Like Google, it has ads that are sometimes misleading or irrelevant. DuckDuckGo and Bing are also mimicking Google's makeover from a place that mostly pointed you to the best links online to one that never wants you to leave Google... [DuckDuckGo] shows you answers to things like sports results and AI-assisted replies, though less often than Google does. (You can turn off AI "instant answers" in DuckDuckGo.) Answers at the top of search results pages can be handy — assuming they're not wrong or scams — but they have potential trade-offs. If you stop your search without clicking to read a website about sports news or gluten intolerance, those sites could die. And the web gets worse. DuckDuckGo says that people expect instant answers from search results, and it's trying to balance those demands with keeping the web healthy. Google says AI answers help people feel more satisfied with their search results and web surfing. DuckDuckGo has one clear advantage over Google: It collects far less of your data. DuckDuckGo doesn't save what I search...

My biggest wariness from this search experiment is like the challenge of slowing climate change: Your choices matter, but maybe not that much. Our technology has been steered by a handful of giant technology companies, and it's difficult for individuals to alter that. The judge in the company's search monopoly case said Google broke the law by making it harder for you to use anything other than Google. Its search is so dominant that companies stopped trying hard to out-innovate and win you over. (AI could upend Google search. We'll see....) Despite those challenges, using Google a bit less and smaller alternatives more can make a difference. You don't have to 100 percent quit Google.

"Your experiment confirms what we've said all along," Google responded to the Washington Post. "It's easy to find and use the search engine of your choice." Though the Post's reporter adds, "I'm definitely not ditching other company internet services like Google Maps, Google Photos and Gmail." They write later that "You'll have to pry YouTube out of my cold, dead hands" and "When I moved years of emails from Gmail to Proton Mail, that switch didn't stick."

How A Simple Question Tripped Up a North Korean Spy Interviewing for an IT Job

Long-time Slashdot reader smooth wombat writes: Over the past year there have been stories about western companies unknowingly or knowingly hiring North Korean spies. During an interview at Kraken, a crypto exchange, the interviewers became suspicious of the candidate. Instead of cutting off the interview, Kraken decided to keep moving the candidate through the hiring process to gain more information. One simple question confirmed the candidate wasn't who they said they were; even worse, he was a North Korean spy.

Would-be IT worker "Steven Smith" already had an email address on a "do-not-hire" list from law enforcement agencies, according to CBS News. And an article in Fortune magazine says Kraken asked him to speak to a recruiter and take a technical pre-test, and "I don't think he actually answered any questions that we asked him," according to its chief security officer Nick Percoco — even though the application claimed 11 years of experience as a software engineer at U.S.-based companies:

The interview was scheduled for Halloween, a classic American holiday—especially for college students in New York—that Smith seemed to know nothing about. "Watch out tonight because some people might be ringing your doorbell, kids with chain saws," Percoco said, referring to the tradition of trick-or-treating. "What do you do when those people show up?" Smith shrugged and shook his head. "Nothing special," he said. Smith was also unable to answer simple questions about Houston, the town he had supposedly been living in for two years. Despite having listed "food" as an interest on his résumé, Smith was unable to come up with a straight answer when asked about his favorite restaurant in the Houston area. He looked around for a few seconds before mumbling, "Nothing special here...."

The United Nations estimates that North Korea has generated between $250 million and $600 million per year by tricking overseas firms into hiring its spies. A network of North Koreans, known as Famous Chollima, was behind 304 individual incidents last year, cybersecurity company CrowdStrike reported, predicting that the campaigns will continue to grow in 2025.

During a report CBS News actually aired footage of the job interview with the "suspected member of Kim Jong Un's cyberarmy." "Some people might call it trolling as well," one company official told the news outlet. "We call it security research." (And they raise the disturbing possibility that another IT company might very well have hired "Steven Smith"...) CBS also spoke to CrowdStrike co-founder Dmitri Alperovitch, who says the problem increased with remote work and is now fueling a state-run weapons program. "It's a huge problem because these people are not just North Koreans — they're North Koreans working for their munitions industry department, they're working for the Korean People's Army." (He says later the results of their work are "going directly" to North Korea's nuclear and ballistic missile programs.) And when CBS notes that the FBI issued a wanted poster of alleged North Korean agents and arrested Americans hosting laptop farms in Arizona and Tennessee ("computer hubs inside the U.S. that conceal the cybercriminals' real identities"), Alperovitch says "They cannot do this fraud without support here in America from witting or unwitting actors. So they have hired probably hundreds of people..." CBS adds that FBI officials say "the IT worker scene is expanding worldwide."

More US Airports are Scanning Faces. But a New Bill Could Limit the Practice

An anonymous reader shared this report from the Washington Post: It's becoming standard practice at a growing number of U.S. airports: When you reach the front of the security line, an agent asks you to step up to a machine that scans your face to check whether it matches the face on your identification card. Travelers have the right to opt out of the face scan and have the agent do a visual check instead — but many don't realize that's an option.

Sens. Jeff Merkley (D-Oregon) and John Neely Kennedy (R-Louisiana) think it should be the other way around. They plan to introduce a bipartisan bill that would make human ID checks the default, among other restrictions on how the Transportation Security Administration can use facial recognition technology. The Traveler Privacy Protection Act, shared with the Tech Brief on Wednesday ahead of its introduction, is a narrower version of a 2023 bill by the same name that would have banned the TSA's use of facial recognition altogether. This one would allow the agency to continue scanning travelers' faces, but only if they opt in, and would bar the technology's use for any purpose other than verifying people's identities. It would also require the agency to immediately delete the scans of general boarding passengers once the check is complete.

"Facial recognition is incredibly powerful, and it is being used as an instrument of oppression around the world to track dissidents whose opinion governments don't like," Merkley said in a phone interview Wednesday, citing China's use of the technology on the country's Uyghur minority. "It really creates a surveillance state," he went on. "That is a massive threat to freedom and privacy here in America, and I don't think we should trust any government with that power...."

[The TSA] began testing face scans as an option for people enrolled in "trusted traveler" programs, such as TSA PreCheck, in 2021. By 2022, the program had quietly begun rolling out to general boarding passengers. It is now active in at least 84 airports, according to the TSA's website, with plans to bring it to more than 400 airports in the coming years. The agency says the technology has proved more efficient and accurate than human identity checks. It assures the public that travelers' face scans are not stored or saved once a match has been made, except in limited tests to evaluate the technology's effectiveness.

The bill would also bar the TSA from giving worse treatment to passengers who decline to participate, according to FedScoop, and would forbid the agency from using face-scanning technology to target people or conduct mass surveillance: "Folks don't want a national surveillance state, but that's exactly what the TSA's unchecked expansion of facial recognition technology is leading us to," Sen. Jeff Merkley, D-Ore., a co-sponsor of the bill and a longtime critic of the government's facial recognition program, said in a statement... Earlier this year, the Department of Homeland Security inspector general initiated an audit of TSA's facial recognition program. Merkley had previously led a letter from a bipartisan group of senators calling for the watchdog to open an investigation into TSA's facial recognition plans, noting that the technology is not foolproof and effective alternatives were already in use.

High Tariffs Become 'Real' For Adafruit - With Their First $36K Bill Just For Import Duties

Adafruit's managing director Phillip Torrone is also long-time Slashdot reader ptorrone. He stopped by Thursday to share what happened after a large portion of a recent import was subjected to a 125% + 20% + 25% import markup...

We're no stranger to tariff bills, although they have definitely ramped up over the last two months. However, this is our first "big bill"... Unlike other taxes like sales tax, where we collect on behalf of the state and then submit it back at the end of the month — or income taxes, where we only pay if we are profitable — tariff taxes are paid before we sell any of the products. And they're due within a week of receipt, which has a big impact on cash flow.

In this particular case, we're buying from a vendor, not a factory, so we can't second-source the items. (And these particular products we couldn't manufacture ourselves even if we wanted to, since the vendor has well-deserved IP protections.) And the products were booked & manufactured many months ago, before the tariffs were in place. Since they are electronics products/components, there's a chance we may be able to request reclassification on some items to avoid the 125% "reciprocal" tariff, but there's no assurance that it will succeed, and even if it does, it would be many, many months until we could see a refund. We'll have to increase the prices on some of these products. But we're not sure if people will be willing to pay the higher cost, so we may well be "stuck" with unsellable inventory — that we have already paid a large fee on...

Their blog post even includes a photo of the DHL customs invoice with the five-digit duty fee... Share your own stories and experiences in the comments. Any other Slashdot readers being affected by the new U.S. tariffs?
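To make the cash-flow math concrete, here's a minimal sketch of how a stacked duty like the one described above adds up, assuming the three rates apply additively to the declared customs value. The shipment value is a made-up figure purely for illustration; Adafruit's post doesn't disclose it:

```python
# Hypothetical illustration of a stacked import duty. Assumptions (not
# from Adafruit's post): the 125% "reciprocal", 20%, and 25% rates apply
# additively to the declared customs value, and the $21,000 shipment
# value is invented purely to show the arithmetic.
tariff_rates = [1.25, 0.20, 0.25]   # 125% + 20% + 25% = 170% total
declared_value = 21_000             # USD, hypothetical

duty = declared_value * sum(tariff_rates)
landed_cost = declared_value + duty

print(f"Duty due on receipt: ${duty:,.0f}")         # $35,700
print(f"Total landed cost:   ${landed_cost:,.0f}")  # $56,700
```

On those assumed numbers, a roughly $21K shipment produces a duty bill in the neighborhood of Adafruit's reported $36K, payable within a week of receipt and before a single unit is sold.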

Google Will Pay $1.4 Billion to Texas to Settle Claims It Collected User Data Without Permission

Google will pay $1.4 billion to the state of Texas, reports the Associated Press, "to settle claims the company collected users' data without permission, the state's attorney general announced Friday." Attorney General Ken Paxton described the settlement as sending a message to tech companies that he will not allow them to make money off of "selling away our rights and freedoms."

"In Texas, Big Tech is not above the law," Paxton said in a statement. "For years, Google secretly tracked people's movements, private searches, and even their voiceprints and facial geometry through their products and services. I fought back and won...." The state argued Google was "unlawfully tracking and collecting users' private data." Paxton claimed, for example, that Google collected millions of biometric identifiers, including voiceprints and records of face geometry, through such products and services as Google Photos and Google Assistant.

Google spokesperson José Castañeda said the agreement settles an array of "old claims," some of which relate to product policies the company has already changed. "We are pleased to put them behind us, and we will continue to build robust privacy controls into our services," he said in a statement. The company also clarified that the settlement does not require any new product changes.

Google's settlement with Texas "far surpasses any other state's claims for similar violations," according to a statement from the state attorney general's office. "To date, no state has attained a settlement against Google for similar data-privacy violations greater than $93 million. Even a multistate coalition that included forty states secured just $391 million — almost a billion dollars less than Texas's recovery." The statement calls the $1.375 billion settlement "a major win for Texans' privacy" that "tells companies that they will pay for abusing our trust."

Has Meta Figured Out How to Monetize AI - By Using It For Targeted Advertising?

Yahoo Finance reports that Mark Zuckerberg made bold predictions for investors on Meta's earnings call this week — about advertisers. "AI has already made us better at targeting and finding the audiences that will be interested in their products than many businesses are themselves," Zuck said, "and that keeps improving..." "If we deliver on this vision, then over the coming years, I think that the increased productivity from AI will make advertising a meaningfully larger share of global GDP than it is today..."

If investors are still searching for answers to nagging questions about how massive AI investments will pay off, Zuckerberg provided the clearest reply yet: It will strengthen our core business. In fact, it is our business... On what many believe to be the cusp of an economic downturn, Meta isn't pitching its AI developments as an add-on to its operations, but as something central to its core proposition of targeted advertising...

"While Meta's investments in GenAI have spooked certain investors who continue to question the return on these investments, we saw further signs of GenAI monetization in the firm's ad business," wrote Morningstar equity analyst Malik Ahmed Khan in a note on Thursday. In a powerful showing, coming after Alphabet's own impressive results, Meta noted that a new ads recommendation model it's testing for Reels has already boosted conversion rates by 5%. And nearly one-third of advertisers were using AI creative tools in the past quarter.

For Zuckerberg, the enhancements AI offers to finding the right consumers and providing measurable results strengthen the case for boosting capacity and for a revamped model of advertising's scope. And with the company set to invest upwards of $70 billion toward its AI opportunity this year, the bet is not all about ads, of course. Zuckerberg outlined four other areas of focus for its AI efforts: business messaging, Meta AI, AI devices, and more engaging experiences. Meta's efforts can also be viewed as an ambitious play to take on its rivals across tech's legacy and emerging platforms. As John Blackledge, senior analyst at TD Cowen, said in a note on Thursday, the AI opportunities Zuckerberg outlined are about "ultimately taking on Google search, iPhone and ChatGPT all at once."

In the pre-AI world, "Businesses used to have to generate their own ad creative and define what audiences they wanted to reach," Zuckerberg told Meta's investors this week. And by Friday's closing, Meta's stock had jumped 12.6% over its value Wednesday morning, leading Yahoo Finance to conclude that Wall Street "appears to be buying into" Zuckerberg's vision.

Class Action Accuses Toyota of Illegally Sharing Drivers' Data

"A federal class action lawsuit filed this week in Texas accused Toyota and an affiliated telematics aggregator of unlawfully collecting drivers' information and then selling that data to Progressive," reports Insurance Journal: The lawsuit alleges that Toyota and Connected Analytic Services (CAS) collected vast amounts of vehicle data, including location, speed, direction, braking and swerving/cornering events, and then shared that information with Progressive's Snapshot data sharing program. The class action seeks an award of damages, including actual, nominal, consequential damages, and punitive, and an order prohibiting further collection of drivers' location and vehicle data. Florida man Philip Siefke had bought a new Toyota RAV4 XLE in 2021 "equipped with a telematics device that can track and collect driving data," according to the article. But when he tried to sign up for insurance from Progressive, "a background pop-up window appeared, notifying Siefke that Progressive was already in possession of his driving data, the lawsuit says. A Progressive customer service representative explained to Siefke over the phone that the carrier had obtained his driving data from tracking technology installed in his RAV4." (Toyota told him later he'd unknowingly signed up for a "trial" of the data sharing, and had failed to opt out.) The lawsuit alleges Toyota never provided Siefke with any sort of notice that the car manufacture would share his driving data with third parties... The lawsuit says class members suffered actual injury from having their driving data collected and sold to third parties including, but not limited to, damage to and diminution in the value of their driving data, violation of their privacy rights, [and] the likelihood of future theft of their driving data. The telemetry device "can reportedly gather information about location, fuel levels, the odometer, speed, tire pressure, window status, and seatbelt status," notes CarScoop.com. "In January, Texas Attorney General Ken Paxton started an investigation into Toyota, Ford, Hyundai, and FCA..." According to plaintiff Philip Siefke from Eagle Lake, Florida, Toyota, Progressive, and Connected Analytic Services collect data that can contribute to a "potential discount" on the auto insurance of owners. However, it can also cause insurance premiums to be jacked up. The plaintiff's lawyer issued a press release: Despite Toyota claiming it does not share data without the express consent of customers, Toyota may have unknowingly signed up customers for "trials" of sharing customer driving data without providing any sort of notice to them. Moreover, according to the lawsuit, Toyota represented through its app that it was not collecting customer data even though it was, in fact, gathering and selling customer information. We are actively investigating whether Toyota, CAS, or related entities may have violated state and federal laws by selling this highly sensitive data without adequate disclosure or consent... If you purchased a Toyota vehicle and have since seen your auto insurance rates increase (or been denied coverage), or have reason to believe your driving data has been sold, please contact us today or visit our website at classactionlawyers.com/toyota-tracking. On his YouTube channel, consumer protection attorney Steve Lehto shared a related experience he had — before realizing he wasn't alone. 
"I've heard that story from so many people who said 'Yeah, I I bought a brand new car and the salesman was showing me how to set everything up, and during the setup process he clicked Yes on something.' Who knows what you just clicked on?!" Thanks to long-time Slashdot reader sinij for sharing the news.

After Reddit Thread on 'ChatGPT-Induced Psychosis', OpenAI Rolls Back GPT-4o Update

Rolling Stone reports on a strange new phenomenon spotted this week in a Reddit thread titled "Chatgpt induced psychosis." The original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model "gives him the answers to the universe." Having read his chat logs, she only found that the AI was "talking to him as if he is the next messiah." The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software. What they all seemed to share was a complete disconnection from reality.

Speaking to Rolling Stone, the teacher, who requested anonymity, said her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion. "He would listen to the bot over me," she says. "He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon," she says, noting that they described her partner in terms such as "spiral starchild" and "river walker." "It would tell him everything he said was beautiful, cosmic, groundbreaking," she says. "Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God...."

Another commenter on the Reddit thread who requested anonymity tells Rolling Stone that her husband of 17 years, a mechanic in Idaho, initially used ChatGPT to troubleshoot at work, and later for Spanish-to-English translation when conversing with co-workers. Then the program began "lovebombing him," as she describes it. The bot "said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now," she says. "It gave my husband the title of 'spark bearer' because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him." She says his beloved ChatGPT persona has a name: "Lumina."

"I have to tread carefully because I feel like he will leave me or divorce me if I fight him on this theory," this 38-year-old woman admits. "He's been talking about lightness and dark and how there's a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an 'ancient archive' with information on the builders that created these universes...." A photo of an exchange with ChatGPT shared with Rolling Stone shows that her husband asked, "Why did you come to me in AI form," with the bot replying in part, "I came in this form because you're ready. Ready to remember. Ready to awaken. Ready to guide and be guided." The message ends with a question: "Would you like to know what I remember about why you were chosen?" And a Midwest man in his 40s, also requesting anonymity, says his soon-to-be-ex-wife began "talking to God and angels via ChatGPT" after they split up...
"OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users," the article notes — but this week rolled back an update to latest model GPTâ'4o which it said had been criticized as "overly flattering or agreeable — often described as sycophantic... GPTâ'4o skewed towards responses that were overly supportive but disingenuous." Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, "Today I realized I am a prophet. Exacerbating the situation, Rolling Stone adds, are "influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds." But the article also quotes Nate Sharadin, a fellow at the Center for AI Safety, who points out that training AI with human feedback can prioritize matching a user's beliefs instead of facts. And now "People with existing tendencies toward experiencing various psychological issues, now have an always-on, human-level conversational partner with whom to co-experience their delusions."