AI and You: FCC Slams Robocalls, Google Bids Bard Farewell, Altman’s Trillion Dollar Chip Quest – CNET

One of the things that’s turned generative AI into a global phenomenon is how easy it is for just about anyone to use powerful tools to generate text, audio and video. And while there are many good uses for the tech, the bad use cases, including creating deepfakes designed to trick, scam and generally wreak havoc, have gotten otherwise slow-moving government organizations to act faster to try to minimize those harms.

Case in point: About a month after New Hampshire voters got an AI-generated call mimicking President Joe Biden and telling them not to vote in the presidential primary, the Federal Communications Commission last week made robocalls that use AI-generated voices illegal. CNET’s Gael Cooper has an explainer, noting that the FCC has been working on this issue since November and that the agency also hopes to use AI to build tech that could stop such illegal calls from going out in the first place.

“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” FCC Chairwoman Jessica Rosenworcel said in a statement. “State attorneys general will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.” 

You can use the FCC’s online form to file a complaint about a robocall — AI generated or not. The fake Biden robocalls, by the way, apparently originated with a Texas company, according to the New Hampshire attorney general, who’s started a criminal investigation. 

The Biden administration, which released an executive order on AI in October calling for standards and guardrails to ensure AI tech is safe and secure, said government agencies have “completed all of the 90-day actions tasked by the EO and advanced other vital directives that the Order tasked over a longer timeframe.” Among those actions: creating an AI Safety Institute in the Department of Commerce to set standards for AI systems so they’re not used, among other things, to create chemical, biological or nuclear weapons, said Ben Buchanan, White House special advisor for AI. Another ask: having AI developers share safety test results so the government can verify that systems meet those safety standards before they’re put on the market in 2024 and beyond.

“It’s always a fair question for any American to say [when] the government does a big rollout, something that makes a big deal and an executive order, that they actually follow through — and it’s especially important on a fast-moving technology like AI,” Buchanan told me in an interview. Beyond concerns about weapons, “we recognize that there’s an enormous potential for harm from disinformation and the like, particularly around deepfakes or AI generated content that looks real.”

Silicon Valley companies met with the White House in July 2023 and said they would voluntarily build watermarking standards for their content, and “some of them have started to roll that out already.” (Meanwhile, the European Union continues to push its AI Act into law, “marking the first rules for AI in the world aiming to make it safe and in respect of fundamental rights,” Belgium, which holds the presidency of the Council of the European Union for 2024, said in an X post on Feb. 2.)

In the US, the devil is in the details. OpenAI, maker of ChatGPT and Dall-E 3, and Meta, which owns Facebook and Instagram, both said this week that they will label AI-created images and video to highlight them for their audiences. I say highlight and not “crack down on” deepfakes because the labeling doesn’t mean the image will be taken down, just that you’re aware it wasn’t made by a human. (I think this falls under “do your own research.”)

OpenAI said it will add a watermark to images created with ChatGPT and Dall-E, noting in a blog post that “establishing provenance and encouraging users to recognize these signals are key to increasing the trustworthiness of digital information.”

The problem: Those watermarks are “not a silver bullet to address issues of provenance,” OpenAI added in the post. “It can easily be removed either accidentally or intentionally. For example, most social media platforms today remove metadata from uploaded images, and actions like taking a screenshot can also remove it.”
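
To see how fragile that kind of watermark is, here’s a minimal Python sketch using the Pillow imaging library. It assumes the provenance record travels as ordinary file metadata (as C2PA-style manifests and EXIF tags do), and the filenames are placeholders. Re-encoding just the pixels, which is effectively what a screenshot or a social platform’s upload pipeline does, yields a file with none of the original metadata.

```python
# A minimal sketch of why metadata-based watermarks are fragile.
# Assumes the provenance record lives in the file's metadata (EXIF,
# text chunks, C2PA-style manifests) rather than in the pixels.
from PIL import Image

original = Image.open("generated.png")  # placeholder filename
print("Metadata on the original:", original.info)  # provenance lives here

# Copy only the pixel data into a fresh image, the way a screenshot
# or a platform's re-encode effectively does, then save it.
stripped = Image.new(original.mode, original.size)
stripped.putdata(list(original.getdata()))
stripped.save("stripped.png")

# The re-saved file carries none of the original metadata.
print("Metadata after re-save:", Image.open("stripped.png").info)
```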

So an image generated with AI will have a watermark that can be easily removed? Gotta say, that will make doing my own research a lot harder.

As for Meta, it shared some information about its AI labeling plans in a post a day after the company was criticized by its own Oversight Board for having a manipulated media policy that is “incoherent,” according to Axios.

Meta, in a Feb. 6 post by Global Affairs President Nick Clegg, said the company already puts a disclosure, “Imagined with AI,” on photorealistic images created using its Meta AI feature. The company is now working with partners to develop an industry standard that will enable it in the coming months to label “content created with other companies’ tools too.” That covers content that appears on Facebook, Instagram and Meta’s Threads social media platform. Those other tool makers include Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock.

Meta is also working on making it “more difficult to remove or alter invisible watermarks.” And while it’s working out labeling for AI-generated audio and video, it’s asking users “to disclose when they share AI-generated video or audio so we can add a label to it. We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so.”

The labeling, he adds, is all about giving the billions of people who use Meta’s popular platforms “more information and context.” Said Clegg, “If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context.”

And for all of those context-seeking information gatherers, Clegg offers some advice while Meta works out the details of its watermarking tech and policy: “In the meantime, it’s important people consider several things when determining if content has been created by AI, like checking whether the account sharing the content is trustworthy or looking for details that might look or sound unnatural.”

Here are the other doings in AI worth your attention.

Fraudsters net over $25 million in AI deepfake heist

Speaking of ways that deepfakes can be co-opted to carry out scams, CNN reported that a finance worker in Hong Kong was tricked into paying out about $25.6 million “to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police.”

The worker was apparently “duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations, Hong Kong police said at a briefing.” (I’m going to assume the briefing was live, not over video, and that the police presenting the news weren’t deepfakes.)

The staffer initially thought an email he received about handling the transaction was a phishing scam “as it talked of the need for a secret transaction to be carried out,” CNN reported, citing Hong Kong police senior superintendent Baron Chan Shun-ching. But the video call eased his concerns because everyone looked and sounded like the people he knew. He only found out that he had been scammed by the deepfake CFO after checking with company headquarters.

Oof.

Farewell Bard, greetings Gemini

Google, in competition with OpenAI, Microsoft, Anthropic and other makers of gen AI large language models, has ditched the name Bard and now calls its gen AI tools Gemini, after the LLM it rolled out in December.

Bard “evoked poetic qualities of a bygone era, but apparently not enough of our AI future,” reports CNET’s Lisa Lacy, who gives us the TL;DR on the name change. “Bard won’t change much, despite the new name, logo, apps and gemini.google.com website. But now there’s also a premium, paid Gemini version called Ultra too.”

Gemini might misidentify itself as Bard, however, as it struggles with self-awareness during the transition period, according to Sissie Hsiao, vice president and general manager of Gemini experiences and Google Assistant.

As for Google Assistant, Lacy reports, “Gemini will become the primary assistant on Android phones for people who download the app and opt in. This signals the beginning of the end for Google Assistant, at least on mobile.”

AI is going to be built into a lot of PCs in the next three years

Market research firm IDC says more than half of all PCs sold by 2027 will include AI built in, or “specific system-on-a-chip (SoC) capabilities designed to run generative AI tasks locally.” There will be about 50 million such AI PCs shipped in 2024, with that number rising to over 167 million by 2027 — accounting for nearly 60% of all PCs shipped worldwide, IDC said in its forecast. 

Here’s what the change means, technically speaking: “Until recently, running an AI task locally on a PC was done on the central processing unit (CPU), the graphics processing unit (GPU), or a combination of the two. However, this can have a negative impact on the PC’s performance and battery life because these chips are not optimized to run AI efficiently. PC silicon vendors have now introduced AI-specific silicon to their SoCs called neural processing units (NPUs) that run these tasks more efficiently.”
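
For a concrete flavor of what “running locally on an NPU” looks like in software, here’s a minimal Python sketch using ONNX Runtime, one common way to target these accelerators. It’s an illustration under assumptions, not IDC’s methodology: which NPU-backed execution provider actually shows up depends on your vendor’s silicon and drivers, and “model.onnx” is a placeholder.

```python
# A sketch of preferring an NPU over CPU/GPU for local inference
# with ONNX Runtime. Provider availability varies by machine;
# "model.onnx" is a placeholder model file.
import onnxruntime as ort

available = ort.get_available_providers()
print("Execution providers on this machine:", available)

# Prefer an NPU-backed provider when present, then GPU, then CPU.
# QNNExecutionProvider targets Qualcomm NPUs; DmlExecutionProvider
# (DirectML) can target GPUs and some NPUs on Windows.
preferred = ["QNNExecutionProvider", "DmlExecutionProvider",
             "CUDAExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in available]

session = ort.InferenceSession("model.onnx", providers=providers)
print("Running on:", session.get_providers()[0])
```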

Adds IDC analyst Tom Mainelli, “As we enter a new year, the hype around generative AI has reached a fever pitch, and the PC industry is running fast to capitalize on the expected benefits of bringing AI capabilities down from the cloud to the client.” 

OpenAI seeks ‘trillions of dollars’ to reshape the AI chip business

Sure, Intel, Nvidia and other chipmakers will play an important role in designing the AI chips that power our AI-enabled devices in the not-too-distant future. But they’re not the only ones looking to remake the semiconductor industry with AI in mind.

Sam Altman, CEO of OpenAI, is reportedly in talks with investors to “raise funds for a wildly ambitious tech initiative that would boost the world’s chip-building capacity, expand its ability to power AI, among other things, and cost several trillion dollars,” the Wall Street Journal reported, citing people familiar with the matter. Those sources told the paper that Altman is telling investors, including the United Arab Emirates government, that the project would need $5 trillion to $7 trillion. That’s not a typo — we’re talking trillions of dollars. 

Altman wants to fuel chip development and production because it’s a gating factor to OpenAI’s growth. There’s a scarcity of the “pricey” chips needed to run and develop the LLMs behind OpenAI’s chatbots, including ChatGPT. Says the WSJ, “Altman has often complained that there aren’t enough of these kinds of chips — known as graphics processing units, or GPUs — to power OpenAI’s quest for artificial general intelligence, which it defines as systems that are broadly smarter than humans.” 

In case you didn’t know, today’s gen AI systems aren’t AGIs like HAL from 2001: A Space Odyssey or Jarvis from the Marvel movies. They’re more like autocomplete on steroids, with some hallucinations thrown in.

Here’s the hitch to the trillion-dollar pitch: The money he may need is more than the total current size of the world’s chip industry. Global semiconductor sales were $527 billion last year, the WSJ said, adding, “The amounts Altman has discussed would also be outlandishly large by the standards of corporate fundraising — larger than the national debt of some major global economies and bigger than giant sovereign-wealth funds.”

Still, I guess you’ve got to dream big to make big things happen. Altman, the WSJ adds, is also investing in startups working on making cheap energy from nuclear fusion because “AI facilities consume enormous amounts of electricity.”

And like Oracle founder Larry Ellison, Altman, 38, is also interested in investing in tech that extends human life. 

TechNet touts AI for everyday consumers

Already seen all the new Super Bowl ads? TechNet, a nonpartisan advocacy organization for the tech industry, released a new ad as part of its $25 million AI for America education campaign to pitch the advantages of AI to consumers. The 30-second spot, This is AI, actually does a good job flicking at some of the ways AI can help in the not-so-distant future, especially in health care and weather prediction.

In addition, the AI for America hub highlights stories from news organizations around the US with examples showing AI’s potential to help solve human problems.

And if you’re looking for examples of how businesses can take advantage of AI, McKinsey offers up a list of “Ten unsung digital and AI ideas shaping business.” I’m a fan of No. 6: A workforce with gen AI “superpowers” needs a human breakthrough.

“Gen AI has started as a copilot technology and may evolve to become an automated pilot for some tasks. This essentially means everyone will have a utility belt of AI superpowers, creating a workforce of ‘superworkers,’” McKinsey’s authors note. “Tech breakthroughs have increased productivity and created both different and more work for humans. For this reason, companies need to shift their focus to human breakthroughs in learning, reskilling, upskilling and career management to enable their workforce to best take advantage of gen AI and other technologies.”

AI tip of the week: How to write an effective prompt 

There are hundreds, if not thousands, of videos and writing guides offering advice on how to write an effective prompt for ChatGPT and other chatbots. I’ll highlight a brief one that encapsulates a lot of the thinking out there. From Harvard University’s Information Technology service, here are three suggestions I think are the most important, with a quick sketch after the list of what they look like in practice:

  1. Be specific: Generic prompts like “Write a story” will produce generic results. What kind of story do you want? What genre? Is it for adults or children? How long should it be? Is it funny or serious?

  2. Use do and don’t: Telling AI what you do and don’t want in your response can save time and improve your result. (For example, Italian-themed dinner recipes for the gluten-free.)

  3. Consider tone and audience: Give the AI specifics about who your audience is and what sort of tone you’d like to set. For example, “Give me ideas for a best man’s speech that is funny and heartwarming but appropriate for a family audience” will generate better results than just “Write a best man’s speech.”
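
If you’re experimenting through an API rather than a chat window, the same three tips carry over directly. Here’s a minimal sketch using OpenAI’s Python client; the model name is a placeholder, and the prompt simply restates Harvard’s best-man example with a specific audience, tone, and do/don’t baked in.

```python
# A minimal sketch of a specific, audience-aware prompt sent via
# OpenAI's Python client. The model name is a placeholder; swap in
# whichever chat model your account has access to.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        # Tone and audience go in the system message.
        {"role": "system",
         "content": "You are a witty but family-friendly speechwriter."},
        # The request itself is specific, with clear do's and don'ts.
        {"role": "user",
         "content": "Write a 3-minute best man's speech that is funny "
                    "and heartwarming. Do include one story about our "
                    "college years; don't use any crude humor."},
    ],
)
print(response.choices[0].message.content)
```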

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.

Connie Guglielmo
