
OpenAI Hires Chief Economist, AISI at Risk, Anthropic's AI Model, and Protests from Creatives


Happy Wednesday, AI & Data Enthusiasts! OpenAI continues to hire new talent: this week, Aaron Chatterji joined OpenAI as its first chief economist, tasked with navigating the complexities of AI, economics, and compliance. Let’s dive into this and the other developments in AI in our newsletter.

In today’s edition: 

  • OpenAI Hires Its First-Ever Chief Economist

  • The Future of the U.S. AI Safety Institute (AISI) at Risk: Will Congress Act?

  • Anthropic’s New AI Model Takes Control of Your Computer

  • Thousands of Creatives Rally Against AI Data Scraping

- Naseema Perveen

WHAT CAUGHT OUR ATTENTION MOST

OpenAI Hires its First-Ever Chief Economist

In a strategic move to deepen its influence and ensure robust governance, OpenAI has announced two high-profile hires. These appointments highlight the company’s commitment to navigating the complex intersections of AI, economics, and compliance.

  • Aaron Chatterji Joins as Chief Economist: OpenAI has appointed Aaron Chatterji, a former chief economist at the U.S. Commerce Department, to lead research on AI’s economic impacts. With his experience advising the Obama administration and coordinating the 2022 CHIPS Act, Chatterji brings valuable political and economic insight to OpenAI, particularly as the company explores chip design and hardware development for its AI systems.

  • CHIPS Act Expertise: Chatterji played a key role in implementing the CHIPS Act, which allocated $280 billion for U.S. semiconductor development. His deep understanding of this initiative will likely aid OpenAI’s future hardware efforts, giving the company a competitive edge as it integrates advanced AI technologies with cutting-edge chip designs.

  • Scott Schools Named Chief Compliance Officer: In another significant hire, OpenAI has welcomed Scott Schools, a former associate deputy attorney general and former head of compliance at Uber, as its new chief compliance officer. Schools will oversee OpenAI’s legal and ethical practices, ensuring that the company adheres to regulatory requirements as it continues to push the boundaries of AI.

These strategic appointments position OpenAI to tackle both the economic implications of AI and the increasingly complex regulatory landscape, ensuring its continued leadership in the AI space. As the company pushes forward, these hires reflect a focus on sustainable, compliant, and impactful growth.

IN PARTNERSHIP WITH VINOVEST

The Rising Demand for Whiskey: A Smart Investor’s Choice

Why are 250,000 Vinovest customers investing in whiskey?

In a word: consumption.

Global alcohol consumption is on the rise, with projections hitting new peaks by 2028. Whiskey, in particular, is experiencing significant growth, with the number of US craft distilleries quadrupling in the past decade. Younger generations are moving from beer to cocktails, boosting whiskey's popularity.

That’s not all.

Whiskey's tangible nature, market resilience, and Vinovest’s strategic approach make whiskey a smart addition to any diversified portfolio.

KEEP YOUR EYE ON IT

The Future of the U.S. AI Safety Institute at Risk: Will Congress Act?

The U.S. AI Safety Institute (AISI), a key government body created to assess AI risks, faces an uncertain future. Established in November 2023 as part of President Biden's AI Executive Order, the AISI operates within NIST, studying AI system safety and collaborating with its U.K. counterpart. Without congressional backing, a future president could dismantle the institute by repealing the executive order.

  • Congressional Authorization Needed: AI industry leaders are pushing for formal legislation to secure AISI’s future and ensure long-term funding. The institute currently operates on a modest $10 million budget, and its potential dismantling could leave the U.S. lagging behind international AI safety efforts.

  • Industry and Academia Call for Action: Over 60 companies, including major tech players like OpenAI and Anthropic, have urged Congress to act before the year ends. The U.S. risks ceding leadership in AI safety to foreign nations, many of which are already advancing AI standards through collaborative efforts.

  • Bipartisan Efforts in Congress: Although bipartisan bills to authorize AISI's activities have advanced in both chambers, opposition from some lawmakers, including Sen. Ted Cruz, poses a challenge. Critics have called for changes to the bill, including reducing the institute's focus on diversity programs.

In an increasingly competitive global AI race, ensuring the continued existence of the AISI is crucial. With mounting pressure from industry, academia, and civil society, the fate of U.S. leadership in AI safety rests on Congress’ ability to pass necessary bipartisan legislation.

Anthropic’s New AI Model Takes Control of Your Computer

Anthropic's latest upgrade to its Claude 3.5 Sonnet model takes a leap in AI capabilities by introducing the ability to control desktop apps through a new “Computer Use” API. The AI can now imitate user interactions such as keystrokes, mouse clicks, and software navigation. Developers can access this API via Anthropic’s platforms, including Amazon Bedrock and Google Cloud's Vertex AI.

  • Desktop Automation Breakthrough: The 3.5 Sonnet model can browse websites, interact with apps, and execute commands like a virtual assistant. It can respond to user prompts and emulate human actions such as filling out forms or navigating complex software. While automation isn’t new, Anthropic’s model offers improvements in accuracy and coding tasks compared to competitors like OpenAI’s GPT-4.

  • Challenges in Accuracy: Despite its strengths, the 3.5 Sonnet still struggles with basic actions like scrolling and zooming, completing less than half of airline booking tasks in tests. Anthropic admits the model remains slow and error-prone, recommending developers start with low-risk tasks.

  • Safety Concerns: While Anthropic acknowledges potential security risks, including the possibility of malicious use via jailbreaks, the company believes gradual deployment will allow for real-time learning and safety enhancements. They’ve implemented precautions like limiting access to sensitive sites and retaining screenshots for 30 days.

Anthropic’s approach reflects the broader race in AI development, with tech giants like Microsoft, OpenAI, and Salesforce exploring AI agents. However, Claude 3.5 Sonnet’s ability to integrate desktop automation with online capabilities could give it an edge—though human oversight remains critical to managing risks.
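For developers curious what driving the Computer Use API looks like, here is a minimal sketch of the request body a client might send. The model and tool identifiers below follow Anthropic's launch documentation, but treat them as indicative rather than authoritative; no network call is made here, and in a real agent loop the client must also execute each click or keystroke action the model returns and send a fresh screenshot back.

```python
# Sketch: composing a "Computer Use" request for the upgraded Claude 3.5 Sonnet.
# No API call is made; this only builds the JSON body a client would send.

def build_computer_use_request(prompt: str,
                               display_width: int = 1280,
                               display_height: int = 800) -> dict:
    """Build a message request that grants the model a virtual screen,
    keyboard, and mouse via the computer-use tool."""
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "tools": [
            {
                "type": "computer_20241022",   # the desktop-control tool
                "name": "computer",
                "display_width_px": display_width,
                "display_height_px": display_height,
            }
        ],
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_computer_use_request("Open the browser and search for flights.")
```

In practice this body is sent with a beta header enabling computer use, and the client alternates between forwarding the model's proposed actions to the desktop and returning screenshots, which is why Anthropic recommends starting with low-risk tasks.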

Thousands of Creatives Rally Against AI Data Scraping

A growing backlash against AI data scraping has united 11,500 artists, writers, and musicians, including Kevin Bacon, Kazuo Ishiguro, and Robert Smith. These signatories have endorsed a petition demanding an end to the unlicensed use of creative works for AI training, citing its harmful impact on creators' livelihoods.

The petition highlights the concerns of creatives who feel their work is being dehumanized by AI companies referring to it as mere "training data." Organized by British composer Ed Newton-Rex, the movement reflects a broader pushback as AI development increasingly relies on scraping data without permission.

This protest arrives as lawmakers consider new regulations, including a potential "opt-out" model in the U.K., which could give creators more control over how their work is used in AI training.

🎤 VOICE YOUR OPINION

Which will have the most impact on UX in 2025?


ICYMI

  • Adobe unveiled Firefly’s new video generation interface.

  • Former Palantir chief security officer joined OpenAI.

  • Former OpenAI CTO Mira Murati is in talks to raise capital for a new startup.

  • Y Combinator invested in controversial AI startup PearAI.

  • Microsoft will let clients build AI agents for routine tasks starting in November.

MONEY MATTERS

  • Alphabet’s AI spin-off SandboxAQ is raising funds at a $5 billion valuation.

  • Apple unveils iOS 18.1 public beta with AI-powered features.

  • Magic Leap creator Rony Abovitz raised $20 million for a new enterprise AI startup.

  • Live Aware, a platform that uses AI to provide insights about games, raised a $4.8 million seed round.

  • Series Entertainment, a gen AI game studio, raised $28 million in new funding.

LINKS WE’RE LOVIN’

  • Podcast: Why AI Agents Are the Next Big Thing in Tech.

  • Cheat sheet: Mastering Microsoft Copilot: The Ultimate Cheat Sheet for Professionals.

  • Course: Explore generative AI with Copilot in Bing.

  • Whitepaper: 6 Steps For Zero Downtime On A Manufacturing Line.

  • Watch: 2024 iPad Mini 7 Unboxing Review.


What do you think of the newsletter?


That’s all for now. And, thanks for staying with us. If you have specific feedback, please let us know by leaving a comment or emailing us. We are here to serve you!

Join 130k+ AI and Data enthusiasts by subscribing to our LinkedIn page.

Become a sponsor of our next newsletter and connect with industry leaders and innovators.
