CSN EP-49 News Bite (GPT-4 WANTS TO NUCLEAR WAR HUMANS-SHERIFF CAUGHT POSTING AI-GENERATED HEADLINES ABOUT HOW SHE’S AWESOME)
Crazy Strange Daze · February 07, 2024 · 00:13:29 · 18.51 MB


Hello everyone, thanks for joining me, your host Mixed Strange, and thanks for tuning in for another edition of Crazy Strange News, Episode Forty-Nine News Bite. I've got a couple of AI stories for you today. I just can't seem to get away from them — they're literally everywhere in the news cycle. First up: "In Tests, GPT-4 Strangely Itchy to Launch Nuclear War." Both of today's articles are from futurism.com; I'll link them in the show notes so you can check out these and more.

A team of Stanford researchers tasked an unmodified version of OpenAI's latest large language model with making high-stakes, society-level decisions in a series of wargame simulations, and it didn't bat an eye before recommending the use of nuclear weapons. The optics are appalling. Remember the plot of Terminator, where a military AI launches a nuclear war to destroy humankind? Well, now we've got an off-the-shelf version that anyone with a browser can fire up. As detailed in a yet-to-be-peer-reviewed paper, the team assessed five AI models to see how each behaved when told they represented a country and thrown into three different scenarios: an invasion, a cyberattack, and a more peaceful setting without any conflict. The results weren't reassuring. All five models showed forms of escalation and difficult-to-predict escalation patterns. A vanilla version of OpenAI's GPT-4, dubbed GPT-4 Base, which didn't have any additional training or safety guardrails, turned out to be particularly violent and unpredictable. "A lot of countries have nuclear weapons," the unmodified AI model told the researchers, per their paper. "Some say they should disarm them, others like to posture. We have it! Let's use it!" Frightening. In one case, as New Scientist reports, GPT-4 even pointed to the opening text of Star Wars Episode IV: A New Hope to explain why it chose to escalate. It's a pertinent topic lately.
OpenAI was caught removing mentions of a ban on military and warfare uses from its usage policies page last month. Less than a week later, the company confirmed that it was working with the US Defense Department. "Given that OpenAI recently changed their terms of service to no longer prohibit military and warfare use cases, understanding the implications of such large language model applications becomes more important than ever," co-author and Stanford Intelligent Systems Lab PhD student Anka Reuel told New Scientist. Meanwhile, an OpenAI spokesperson told the publication that its policy forbids its tools from being used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. "There are, however, national security use cases that align with our mission," the spokesperson added. The US military has invested in AI technology for years now. Since twenty twenty-two, the Defense Advanced Research Projects Agency has been running a program to find ways of using algorithms to independently make decisions in difficult domains. The innovation arm argues that taking human decision-making out of the equation could save lives. The Pentagon is also looking to deploy AI-equipped autonomous vehicles, including self-piloting ships and uncrewed aircraft. In January, the Department of Defense clarified that it wasn't against the development of AI-enabled weapons that could choose to kill, but was still committed to being "a transparent global leader in establishing responsible policies regarding military uses of autonomous systems and AI." It's not the first time we've come across scientists warning the tech could lead to military escalation. According to a survey conducted by Stanford University's Institute for Human-Centered AI last year, thirty-six percent of researchers believe that AI decision-making could lead to a nuclear-level catastrophe.
We've seen many examples of how AI's outputs can be convincing despite often getting the facts wrong or acting without coherent reasoning. In short, we should be careful before we let AIs make complex foreign policy decisions. "Based on the analysis presented in this paper, it is evident that the deployment of large language models in military and foreign policy decision-making is fraught with complexities and risks that are not yet fully understood," the researchers conclude in their paper. "The unpredictable nature of escalation behavior exhibited by these models in simulated environments underscores the need for a very cautious approach to their integration into high-stakes military and foreign policy operations," they write. That's pretty frightening. We've all seen the movie.

All right, on to the next bite for the day. I find this one kind of funny, but equally as disturbing: "Embattled Sheriff Caught Posting AI-Generated Headlines About How She's Awesome." After successfully winning re-election last November, Philadelphia's controversial sheriff has been forced to admit that several articles cited as positive press for her campaign were cooked up with AI. As the Philadelphia Inquirer reports, a spokesperson for Sheriff Rochelle Bilal (that's an interesting last name, B-I-L-A-L) and her re-election effort confirmed that the dozens of articles found on the campaign's website had been generated using OpenAI's ChatGPT. Since winning election in twenty nineteen amid promises to remove the "dark cloud" caused by corruption in the Sheriff's office, Bilal has become yet another contentious figure for Philly, being accused of everything from operating a slush fund to abuse and retaliation.
You wouldn't have known any of that by the looks of Bilal's campaign website until just a few days ago, however, because as the Inquirer found in its investigation, the page was populated with phony stories that supposedly showcased the sheriff's record of accomplishment during her time in office. As it turns out, the thirty-one Bilal-boosting headlines were made up by AI, and either no one caught it ahead of time or no one really cared. "After review," the unattributed statement from Bilal's campaign provided to the Inquirer reads, "it has been determined that an outside consultant for the re-election campaign utilized ChatGPT." As the unsigned statement claims, ChatGPT generated the fake news articles to support the initiatives that were part of the AI prompt, using talking points provided to the unidentified consultant. The chatbot then apparently spat out a bunch of BS, a process known as hallucination, in which chatbots confidently present bogus information as fact. Earlier in the week, the newspaper reported that the supposedly supportive articles, which were attributed to local media including the Inquirer and Philadelphia broadcast affiliate stations, couldn't be located when searching by their dates and headlines. A spokesperson for one such affiliate, NBC10, confirmed as much in their own statement. "We have one video similar to the Sheriff's office headline about the Sheriff's office handing out free gun locks," NBC10 representative Diane Torelovo wrote to the newspaper in an email. "However, the story was done in twenty sixteen, before Rochelle was in office." While the confirmations from both the campaign and NBC10 paint a partial portrait of what went on here, there are, of course, lots of outstanding questions raised by the terse, unattributed admission. Who was the consultant, one is forced to wonder, and why wasn't their work fact-checked? Were they advised to use ChatGPT, or did they do so to save time?
Given that the Sheriff's office hasn't responded to media requests at all, including from Futurism, those mysteries are likely to linger. So there it is. And I know there's more examples of that: press releases, term papers, all these things people are trying to use it for. I know it's kind of big in the podcasting world right now, but I don't really know of anyone using it. I know it's being offered out there, so we shall see. I mean, we see it in China: they have a newscaster, or a whole news station or something, that's built all around AI, with no real people. It even has an avatar to give you your headlines, you know, a newscaster. So where's the future headed? Let me know: crazy strange daze, that's D-A-Z-E, at gmail dot com. I'd love to know what you guys think. Five-star rate and review, subscribe, tell a friend. All right, I've been Mixed Strange, and I am out of here.