
AI and You: The AI Blame Game, Apple Hints at AI, Gen Z OK With AI Costing Co-Workers Their Jobs – CNET


In addition to “the dog ate my homework,” “I didn’t see the email” and “it’s an honest oversight,” you can now add “the AI did it” to the list of excuses people may use to avoid taking responsibility for something they did or said. 

Case in point: An Australian news station apologized after showing an altered photo of a member of Parliament for the state of Victoria that it then claimed had been edited by an AI tool in Adobe Photoshop, according to The Sydney Morning Herald. But that apology — and the “AI did it” excuse — came only after the politician, Georgie Purcell, posted the original photo of herself alongside the edited one on social media. Purcell said “having my body and outfit photoshopped by a media outlet” wasn’t something she expected in an otherwise busy day in which the Animal Justice Party member was pushing for changes to duck hunting rules.

“Note the enlarged boobs and outfit to be made more revealing. Can’t imagine this happening to a male MP,” Purcell wrote in her posting on X.

The news station, 9News, called it a “graphics error” and blamed it on Adobe rather than human error. “As is common practice, the image was resized to fit our specs. During that process, the automation by Photoshop created an image that was not consistent with our original,” news director Hugh Nailon said in a widely reported statement.

Adobe, which makes the popular photo editing tool, wasn’t having it and said any changes made to the image would have required “human intervention and approval.” The news station put out a subsequent statement saying there was, in fact, “human intervention in the decision.”

While the AI tool may have made changes consistent with selfie filters, Rob Nicholls, a professorial fellow at the University of Technology Sydney, told The New York Times that doesn’t explain why the news station didn’t check the image against the original. “Using AI without strong editorial controls runs the risk of making very significant errors,” he said, adding that AI can replicate existing biases. “I don’t think it’s coincidental that these issues tend to be gendered.”

Is it any wonder, then, that some politicians are blaming AI deepfakes for “damning photos, videos and audio” of things they may have actually said or done? The Washington Post reported that “Former president Donald Trump dismissed an ad on Fox News featuring video of his well-documented public gaffes — including his struggle to pronounce the word ‘anonymous’ in Montana and his visit to the California town of ‘Pleasure,’ a.k.a. Paradise, both in 2018 — claiming the footage was generated by AI.” The paper added that the images in the ad were widely covered and “witnessed in real life by many independent observers.”

What does it all mean? That separating fact from fiction is only going to get harder. Hany Farid, a professor at the University of California at Berkeley, said that deepfakes aside, AI is creating what he calls a “liar’s dividend.” 

“When you actually do catch a police officer or politician saying something awful, they have plausible deniability” in the age of AI, he told the Post.

Here are the other doings in AI worth your attention.

Is it real or AI?

There isn’t a magic tool, at least not yet, that can distinguish between AI-generated audio, video and text and human-created creative endeavors. But I’m reminded that the nonpartisan and nonprofit News Literacy Project has a helpful one-page infographic that offers up “six news literacy takeaways and implications to keep in mind as this technology continues to evolve.”

No. 5 is that AI “signals a change in the nature of evidence…Don’t let AI technology undermine your willingness to trust anything you see and hear” in the news. “Just be careful about what you accept as authentic.”

So when it comes to photos and videos, remember that “the rise of more convincing fake photos and videos means that finding the source and context for visuals is often more important than hunting for visual clues of authenticity. Any viral image you can’t verify through a reliable source — using a reverse image search, for example — should be approached with skepticism.”

And on the content creator side, I’ll add that just because you can create something with an AI tool — for any number of legitimate reasons — doesn’t mean you necessarily should. Case in point: Instacart deleted some AI-generated images of food in its online recipes that were “very weird,” according to Business Insider.  

“The images began raising eyebrows on the Instacart subreddit in early January, when users started to compile their favorite absurdities,” Insider said. The AI images featured physically impossible compositions, unnatural shadows and strangely blended textures. “For instance, pictures accompanying a recipe for ‘Chicken Inasal’ showed two chickens conjoined at the shoulder, while the ‘Hot Dog Stir Fry’ photo showed a slice of hot dog with the interior texture of a tomato,” Insider wrote.

Instacart told Insider that it is experimenting with AI to create food images and working to improve what it offers to readers. “When we receive reports of AI-generated content that does not deliver a high-quality consumer experience, our team reviews the content and may remove it,” Instacart said.

Expect to see more of these kinds of experiments since the technology is still really in its infancy. “Language and text took the forefront in 2023, but image and video will be on the upswing for 2024,” Forrester Research said in its predictions for the year. “While 2023 was a year of excitement, experimentation, and the first stages of mass adoption, 2024 will be a year of scaling and optimization and solving the hard problems differentiating yourself as a superhero in a world where everyone has superpowers.”

Google’s Bard gets into the picture

Google, which has been updating its Bard conversational AI to better compete with rival chatbots like ChatGPT, announced this week in a blog post that users can now generate images for free as part of a text-to-image update. 

When someone types in a prompt, like “create an image of a hot air balloon flying over the mountains at sunset,” Bard generates what Google describes as “custom, wide-ranging visuals to help bring your idea to life,” reports CNET’s Lisa Lacy, who tried out the text-to-image functionality.

“Google’s post noted that Bard includes a distinction between visuals created with Bard and original human artwork, and it embeds watermarks into the pixels of generated images. To test this, I asked it to create an image of Botticelli’s Birth of Venus. It offered up a replica, but sloppier,” she said, calling out problems with the face and hands in the image. That may be why Bard also offers an option to report a legal issue and to give each image a thumbs up or thumbs down.

And with all the talk about AI being used to create deepfakes — with Taylor Swift being the most prominent victim in recent weeks — Lacy noted that Google aims to limit “violent, offensive or sexually explicit content” and applies filters to avoid generating images of named people. Indeed, it declined to create an image of Super Bowl quarterbacks Patrick Mahomes and Brock Purdy having a picnic, or of Beyoncé at the bank.

Apple teases AI, Cook “incredibly excited” about whatever it is

CEO Tim Cook used Apple’s quarterly earnings call to share the news that the company sees a “huge opportunity for Apple with gen AI and AI” this year and that “we have a lot of work going on internally.” 

“We’ve got some things that we are incredibly excited about that we’ll be talking about later this year,” Cook said, deflecting further questions about Apple’s AI plans. “Our M.O., if you will, has always been to do work and then talk about work, and not to get out in front of ourselves. And so we’re going to hold that to this as well.”

A transcript of the call can be found here.

Reports emerged last July that Apple was working on an AI chatbot called Apple GPT and a large language model called Ajax, but the company didn’t comment at the time, says CNET’s Lisa Lacy, who listened in on the call. She adds that Apple “so far has been conspicuously absent from the generative AI frenzy that’s engulfed Big Tech. Google, Microsoft, OpenAI and others have spent the last year fine-tuning their chatbots as they vie for market share. But it’s very much on brand for Apple to take its time prior to entering a new product category — and to subsequently transform that space completely. Look no further than products like the iPhone and the iPad.”

FCC chair wants to make AI-generated robocalls illegal

A week after a bad actor (or actors) sent out a robocall faking President Joe Biden’s voice and telling thousands of New Hampshire voters not to vote in the presidential primary, Federal Communications Commission Chair Jessica Rosenworcel proposed that the agency “recognize calls made with AI-generated voices are ‘artificial’ voices under the Telephone Consumer Protection Act (TCPA), which would make voice cloning technology used in common robocalls scams targeting consumers illegal.”

“AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate. No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls,” Rosenworcel said in a press statement.

The FCC said this step builds on a “notice of inquiry” it launched in November to explore how it could combat illegal robocalls, including those using AI technology.

The TCPA, enacted in 1991, is the primary law the FCC says it uses to limit junk calls, and an expansion of the definition of artificial voices could help states’ attorneys general pursue bad actors. The TCPA “restricts the making of telemarketing calls and the use of automatic telephone dialing systems and artificial or prerecorded voice messages. Under FCC rules, it also requires telemarketers to obtain prior express written consent from consumers before robocalling them. If successfully enacted, this Declaratory Ruling would ensure AI-generated voice calls are also held to those same standards,” the FCC said.

In case you’re wondering if the agency has gone after anyone using the law, CNN reported that it’s “been used in anti-robocall crackdowns, including a case against conservative activists Jacob Wohl and Jack Burkman for carrying out a voter suppression campaign during the 2020 election. The campaign by Wohl and Burkman prompted the FCC to fine them $5 million, a record-breaking figure at the time.”

On the robocall front, there were nearly 55 billion robocalls made in the US last year, up from 50.3 billion in 2022, according to YouMail, a robocall blocking service. So yes, you’re right in thinking you’re getting more annoying calls these days. 

OpenAI says its AI likely won’t be used to create bio weapons

OpenAI, maker of ChatGPT, released a study this week after asking 50 biology experts and 50 students to test whether its large language model, GPT-4, “has the potential to help users create harmful biological weapons.”

The sort of good news: They found the LLM “provides at most a mild uplift in biological threat creation accuracy.” More study is required, they added.

OpenAI said it wanted to look at a hypothetical example in which “a malicious actor might use a highly capable model to develop a step-by-step protocol, troubleshoot wet-lab procedures, or even autonomously execute steps of the biothreat creation process when given access to tools” for conducting experiments in the cloud.

As part of this work, the company said it’s working on building “an early warning system for LLMs being capable of assisting in threat creation. Current models turn out to be, at most, mildly useful for this kind of misuse, and we will continue evolving our evaluation blueprint for the future.”

The work comes as President Joe Biden signed an executive order in October to put in place AI safeguards, including guarding against the tech being used to create chemical, biological, radiological or nuclear weapons. At the same time, OpenAI created a “preparedness” team to assess the threats from its technology.

That’s sort of good news, right?

Everything you need to know about ChatGPT

Speaking of OpenAI and ChatGPT, two other things worth mentioning.

First, if you’re looking for a recap of how ChatGPT works and how it’s evolving, CNET’s Stephen Shankland has a straightforward explainer on what you need to know about the tool, which was released in November 2022. That includes its origin story and how to use ChatGPT.

“ChatGPT and the generative AI technology behind it aren’t a surprise anymore, but keeping track of what it can do can be a challenge as new abilities arrive,” Shankland writes. “Most notably, OpenAI now lets anyone write custom AI apps called GPTs and share them on its own app store. While OpenAI is leading the generative AI charge, it’s hotly pursued by Microsoft, Google and startups far and wide.”

Second, OpenAI said it was partnering with Common Sense Media, “the nonprofit organization that reviews and ranks the suitability of various media and tech for kids, to collaborate on AI guidelines and education materials for parents, educators and young adults,” TechCrunch reported.

That includes putting “nutrition labels” on AI-powered apps that highlight how they can help or hurt kids, teens and families, including curating those custom apps mentioned by Shankland that are now being offered in OpenAI’s new GPT Store.

“Our guides and curation will be designed to educate families and educators about safe, responsible use of ChatGPT, so that we can collectively avoid any unintended consequences of this emerging technology,” Common Sense Media CEO and founder James P. Steyer said in a statement.

Let the job retraining begin 

In a January survey of who will adopt AI and generative AI, and how, the Boston Consulting Group found that “generative AI will revolutionize the world — and executives want to capitalize on it” after surveying 1,406 C-level executives in 14 industries around the world. While 85% of the execs said they plan to invest more in AI and gen AI tech in 2024, and they expect to see productivity gains and cost savings from adopting these new tools, it’s obvious that the devil remains in the details. Here are a few interesting data points about AI usage on the job and among the staff that I pulled out of the report:

  • 95% of executives said they now allow AI and gen AI use at work, “a stark shift from July 2023, when more than 50% actively discouraged such use.”

  • 66% of those surveyed said they are “outright dissatisfied or ambivalent with their organization’s progress on AI and generative AI so far, with the top three reasons being lack of talent and skills, unclear roadmaps, and no overall strategy for responsible use.”  

  • Executives said that 46% of their workforce, on average, “will need to undergo upskilling in the next three years due to gen AI.” So far, only 6% of companies surveyed have managed to train over one-fourth of their staff on gen AI tools. 

  • 74% agree that “significant change management is required” to deal with gen AI adoption.

  • 59% of these C-suite leaders said they have “limited or no confidence in their executive team’s proficiency in gen AI.” Ouch.

  • 65% of executives believe it will take at least two years for AI and gen AI to “move beyond hype.”

  • And last, but not least, 71% say they are focused on “limited experimentation and small pilots” with AI and gen AI. 

The TL;DR from BCG: Companies should be investing, retraining workers and figuring out how AI and gen AI will fit into their strategy today, so they’re not left behind. But the winners, the firm said, will need to make sure they are implementing responsible AI (RAI) principles. 

The complete report, From Potential to Profit with Gen AI, can be found here.

Amazon will help you shop with an AI assistant named Rufus

In case you aren’t already spending enough time or money on Amazon, the world’s largest e-commerce site just released a new gen AI-powered chatbot to help you make purchasing decisions. 

It’s called Rufus, and the company said in a blog post that it’s an “expert shopping assistant trained on Amazon’s product catalog and information from across the web to answer customer questions on shopping needs, products, and comparisons, make recommendations based on this context, and facilitate product discovery in the same Amazon shopping experience customers use regularly.” 

Rufus began rolling out in the US on Feb. 1 to a “small subset of customers in Amazon’s mobile app” and will be made available to other mobile customers in the US in the coming weeks.

Amazon says it’s been using AI for over 25 years to power its product recommendations, pick paths in its fulfillment centers, map out drone deliveries and help give voice to its Alexa voice assistant. It says that Rufus, based on conversational AI tech, “is going to change virtually all customer experiences we know.” 

How? By combining user-generated reviews to identify “common themes” and give shoppers “insights,” like how many other people bought whatever it is you’re looking at. It’s also using AI to help sellers on its site “write more engaging and effective titles and product descriptions.” You’ll also be able to shop by “occasion or purpose,” with Rufus suggesting what you need to buy to start an indoor garden, for example.

And in case you’re wondering why they called it Rufus, The New York Times reported that “Amazon allows its employees to bring their dogs to work, and a dog named Rufus was one of the first to roam its offices in the company’s early days.”  

Amazon announced the new shopping tool the same day it announced fourth-quarter earnings, in which it said sales rose 14% thanks to robust consumer demand during the holiday season.

Gen Z worries AI will replace them, but OK with it booting co-workers 

Instead of ending with a vocabulary word of the week, I thought I’d share a survey courtesy of EduBirdie, a writing platform, which decided to look at how Gen Z is using AI and ChatGPT in the workplace. The company asked 2,000 Gen Zers (those born after 1996, it says) to answer questions about gen AI. Here are the top takeaways. You can read the complete study here.

  • Gen Z is most likely to use ChatGPT at work to do their research (61% said they had used AI for fact finding), with 23% saying they used the tool to get hired in the first place. (I guess they’re not worried about the hallucination problem when it comes to AI and facts.)

  • One in five of the Gen Zers who participated in the study has gotten in trouble for using ChatGPT at work, with 2% saying they were fired, 5% saying they were almost fired and 6% saying they were “told off.”

  • 36% said they felt guilty about using ChatGPT to help them do work tasks. 

  • One in 10 Gen Zers fears that AI could take their job in 2024, with 47% saying that’s either very likely or possible this year.

  • 61% think AI could take their job within a decade. 

  • 49% say that ChatGPT has made them more creative, 49% said it has helped them be less stressed, 46% said it’s helped them become more productive and 15% said they earned more money thanks to ChatGPT.

  • 35% of Gen Z wouldn’t mind if AI took the place of one of their colleagues, with “I wouldn’t care much” and “it would be their fault for not working harder” given as reasons for their lack of concern for some of their colleagues. (Ouch.)

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.


Connie Guglielmo
