🧠 Meta making AI moves

PLUS: Can Google out-process Nvidia?


Good morning folks, welcome back to another edition of Big Brain!

Our theme today? Corporations and how much control they have over this burgeoning sector.

🔦 Today's AI spotlights:

  • Meta making AI moves

  • Can Google out-process Nvidia?

  • Corporate control of AI

🔦 META MAKING AI MOVES

It sounds like Meta has been thinking a lot about AI recently.

Yesterday saw two announcements by the company:

A new model called SAM, and a plan to start generating ads with AI.

First, let's meet SAM, also known as Segment Anything Model.

This model can identify objects within images and videos, even those it hasn't encountered in its training.

It can then select those objects in response to prompts (such as clicks, boxes, or text) so users can isolate and edit them.

Pretty cool, right? This sort of functionality will help average users with their graphic design quests.
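For the technically curious: SAM is open source, and Meta ships a small Python package for it. Here's a rough sketch of what prompting it with a single click looks like (the checkpoint filename and image path are illustrative, and the model weights need to be downloaded locally first):

```python
# pip install git+https://github.com/facebookresearch/segment-anything.git
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a pretrained SAM checkpoint (model type and filename are the defaults
# from Meta's repo; adjust the path for your own setup).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB image as a NumPy array of shape (H, W, 3).
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt with one foreground click; SAM returns candidate masks for the
# object under that point, plus a confidence score for each mask.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),  # 1 = foreground point, 0 = background
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # boolean mask of the selected object
```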

Next, Meta plans to use generative AI to create ads for advertisers by the end of the year. It has also created a new product team focused on generative AI tools under CPO Chris Cox.

Aside from hype, why is Meta moving in this direction?

Well, to put it bluntly... Meta is in desperate need of a new direction.

Between Apple's privacy changes hindering its ads business and the lack of results from its metaverse bet, Meta cannot afford to miss the AI wave.

(Note: Omneky and HeyGen are two companies already experimenting with generative ads; they're using OpenAI's DALL-E 2 and GPT-3.)

📰 Read the Full Articles Here and Here

🔦 CAN GOOGLE OUT-PROCESS NVIDIA?

Looks like Google may have found a way to beat Nvidia at its own game.

Google has released new information about its AI-model-training supercomputers.

The big takeaway? Its custom Tensor Processing Unit (TPU) chips are more efficient than comparable Nvidia systems.

Google has strung over 4,000 of the chips together using custom-developed optical switches.

The result is a supercomputer that is up to 1.7 times faster and 1.9 times more power-efficient than a comparable system built on Nvidia's A100 chips.

Google's largest publicly disclosed LLM, PaLM, was trained using two of these supercomputers over the span of 50 days.

Google's supercomputers make it easy to reconfigure connections between chips on the fly, which helps the system route around failed hardware and squeeze out extra performance.

The company has hinted that it might be working on a new TPU to compete with Nvidia's H100.

📰 Read the Full Article Here (non-paywalled version here)

🔦 CORPORATE CONTROL OF AI

AI is becoming increasingly controlled by Big Tech companies.

Experts worry that corporate incentives may lead to dangerous outcomes, as companies rush products to market and sideline safety concerns.

As AI applications become more widespread, so do incidents of ethical misuse.

Legislators and policymakers are showing increasing interest in AI regulation. Some think that this could provide a counterweight to corporate self-regulation.

One area of concern for many is climate change.

Training large AI language models has significant environmental costs.

However, AI also has the potential to reduce emissions. DeepMind, for example, built an AI system that cut the energy used to cool Google's data centers.

📊 Do big corporations have too much control over AI development?


📰 OTHER HEADLINES & STORIES

πŸ› οΈ AWESOME AI TOOLS

  • SiteGPT: ChatGPT for every website

  • Rask: AI video localization & dubbing app

  • WisdomAI by Searchie: Generative AI chat for your audio and video content

  • WAGPT: Voice & text messaging with ChatGPT on WhatsApp

  • AI Backdrop: Generate a hyper-realistic background for anything

πŸ–ΌοΈ AI ART PIECE

AI Landscapes, by Aaron Sharland

🐦 AI TWEET THREAD OF THE DAY

🤖 CHATGPT PROMPT OF THE DAY

To be fair, I always considered the movie Shrek to be a piece of fine art 😂

How would you rate today's newsletter?
