$13B Microsoft-OpenAI Deal Finally Gets UK Regulators’ Approval https://www.eweek.com/artificial-intelligence/microsoft-openai-uk-regulators-cma/ Thu, 06 Mar 2025 07:03:24 +0000 https://www.eweek.com/?p=232797

The post $13B Microsoft-OpenAI Deal Finally Gets UK Regulators’ Approval appeared first on eWEEK.

The U.K.’s Competition and Markets Authority (CMA) has given the green light to Microsoft’s $13 billion partnership with OpenAI, concluding the deal does not warrant a full investigation under U.K. merger rules.

The CMA had been investigating whether Microsoft’s growing involvement with OpenAI amounted to a takeover, which could have raised competition concerns. However, after extensive analysis, the regulator announced on Wednesday that Microsoft holds “material influence” over OpenAI but does not have “de facto control.”

No change in control, says CMA

The investigation was triggered by OpenAI’s leadership shake-up in November 2023, when CEO Sam Altman was briefly ousted and later reinstated. This raised concerns about Microsoft’s influence, given its financial backing and deep integration with OpenAI’s AI models in its products.

After reviewing extensive documentation and consulting both companies, the CMA concluded that while Microsoft plays a significant role in OpenAI’s operations — especially in terms of funding, technology, and cloud computing power — it does not outright dictate the company’s policies or direction.

“Looking at the evidence in the round (including the recent changes), we have found that there has not been a change of control by Microsoft from material influence to de facto control over OpenAI,” said Joel Bamford, executive director of mergers at the CMA. “Because this change of control has not happened, the partnership in its current form does not qualify for review under the UK’s merger control regime.”

A win for Microsoft, but not a ‘clean bill of health’

Despite the CMA’s ruling, the regulator was quick to clarify that the decision does not mean Microsoft’s AI dealings are free from competition concerns. Instead, the agency emphasized the need for ongoing vigilance in monitoring the fast-developing AI sector.

Bamford wrote, “The CMA’s findings on jurisdiction should not be read as the partnership being given a clean bill of health on potential competition concerns; but the UK merger control regime must of course operate within the remit set down by Parliament.”

The U.K. authority acknowledged that the prolonged nature of the review — stretching over 14 months — was due to the complex and evolving nature of the Microsoft-OpenAI relationship. As recently as January 2025, Microsoft adjusted its contractual agreements to lessen OpenAI’s reliance on its computing infrastructure, a move that likely played a role in securing the CMA’s approval.

The CMA’s clearance marks a regulatory win for Microsoft, which has been facing increasing scrutiny over its AI ambitions. In the U.S., the Federal Trade Commission (FTC) has raised concerns that Microsoft’s partnership with OpenAI could reinforce its dominance in cloud computing and give it an unfair edge in the AI race.

Regulators still watching AI deals closely

The CMA’s ruling reflects the broader regulatory debate around big tech’s growing influence in AI. The watchdog has been keeping a close eye on major AI investments, recently clearing Google’s and Amazon’s partnerships with AI startup Anthropic.

Despite the clearance, regulators worldwide remain cautious about how tech giants are shaping the future of artificial intelligence.

Sesame AI’s New Voice Assistant: ‘Almost Human’ https://www.eweek.com/news/sesame-ai-voice-assistant/ Tue, 04 Mar 2025 21:33:51 +0000 https://www.eweek.com/?p=232752 Sesame AI’s voice assistant uses advanced speech technology to create natural, emotionally aware conversations that feel more human than ever.

The post Sesame AI’s New Voice Assistant: ‘Almost Human’ appeared first on eWEEK.

A new AI-powered voice assistant from Sesame AI is pushing the boundaries of human-like interaction, using advanced speech technology to create more natural and emotionally aware conversations.

Unlike most AI voices that sound flat or mechanical, Sesame’s assistant, available in two voices — Maya and Miles — feels expressive, responsive, and emotionally aware. The company is focused on what it calls “voice presence,” a mix of emotional intelligence, natural timing, and context awareness that makes conversations feel personal.

Maya, for instance, can recognize and adjust her tone based on the situation, adding pauses, adjusting volume, and even altering her rhythm to create a more natural, engaging conversation.

After testing Sesame AI, I’m impressed

I tested the tool, and I was genuinely impressed by how responsive it was. Sesame AI doesn’t just execute commands — it listens and engages. During my testing, it responded appropriately to questions and even picked up on my mood. When I sounded tired or unenthusiastic, it asked if everything was okay and even tried to cheer me up with a joke.

Most voice assistants feel like robots — they follow commands, answer questions, and sometimes crack a joke, but there’s always something missing: real personality. That’s what Sesame AI is trying to change. With human-like voices and a knack for conversation, it’s not just answering your questions — it’s engaging with you, sometimes in ways that feel too real.

What’s next for Sesame?

The technology is still in its early stages, and Sesame says more advancements are on the way.

“Building a digital companion with voice presence is not easy, but we are making steady progress on multiple fronts, including personality, memory, expressivity and appropriateness,” the company said in a statement.

Sesame is also developing AI-powered glasses designed to be worn throughout the day, giving users constant access to the assistant and enabling it to “see” the world alongside them.

The startup has already secured funding from major investors like Andreessen Horowitz, Spark Capital, and Matrix Partners — all early backers of Oculus VR. Sesame also plans to open source its AI models and expand to more than 20 languages in the coming months.

Google Demos Gemini AI Vision Features at MWC 2025: Here’s When You Can Get Them https://www.eweek.com/news/google-gemini-ai-mwc-2025/ Tue, 04 Mar 2025 17:54:02 +0000 https://www.eweek.com/?p=232744 Google’s Gemini AI real-time video and screen-sharing will offer hands-free assistance for creativity, tasks, and shopping.

The post Google Demos Gemini AI Vision Features at MWC 2025: Here’s When You Can Get Them appeared first on eWEEK.

Google took center stage at Mobile World Congress 2025 with exciting updates to its Gemini AI, unveiling real-time video and screen-sharing features designed to make AI more interactive and practical. These advancements aim to integrate artificial intelligence seamlessly into daily life, from creative assistance to shopping recommendations.

Live Video: AI that sees what you see

The Live Video feature enables users to engage in real-time video conversations with Gemini, using their phone’s camera to ask questions or seek advice based on what the AI “sees.” In a demonstration video, a user showcased their pottery collection to Gemini and requested color suggestions to enhance their existing vases. The AI evaluated the ceramics and recommended an appropriate glaze color, demonstrating its capability to process visual input and deliver contextual, real-time recommendations.

Screen Sharing: AI that understands your screen

The Screen Sharing feature takes Gemini’s abilities a step further, letting users share their phone screen during a conversation. Whether navigating a website, shopping online, or troubleshooting a task, Gemini can now offer on-screen guidance. During a demo, a user asked for fashion advice on a pair of jeans they were considering purchasing, and Gemini helped them select the perfect outfit by analyzing the details on the screen in real time.

While the AI struggled to interpret the style without a verbal description, it successfully adapted to follow-up questions, proving its potential for real-time assistance.

When can you try these new Gemini features?

The new capabilities will roll out later this month for users subscribed to the Google One AI Premium Plan, available on Android devices. Apple users aren’t left out either — Google updated its Gemini app for iPhones, adding lock screen widgets for faster access to AI-powered assistance.

If you’re at MWC 2025, you can try these features firsthand at Google’s Android Avenue exhibit between Halls 2 and 3.

Meanwhile, Lenovo is turning heads at MWC 2025 with its lineup of cutting-edge innovations. The company is showcasing AI-enhanced business laptops, a foldable ThinkBook concept with an expandable screen, and even a solar-powered laptop prototype.

OpenAI Expands Deep Research to All Paying ChatGPT Users https://www.eweek.com/news/openai-expands-deep-research/ Wed, 26 Feb 2025 18:59:04 +0000 https://www.eweek.com/?p=232611 OpenAI expands deep research to more ChatGPT users, boosting query limits and adding features like citations, image embeds, and document processing.

The post OpenAI Expands Deep Research to All Paying ChatGPT Users appeared first on eWEEK.

OpenAI has officially expanded its deep research tool to all paying ChatGPT users, making advanced research capabilities accessible beyond the Pro subscription. The tool, designed to generate detailed reports with citations from multiple online sources, was initially restricted to ChatGPT Pro users — those paying $200 per month. As of this week, it is available to Plus, Team, Enterprise, and Edu subscribers, OpenAI announced Tuesday on X.

OpenAI’s deep research allows users to generate detailed reports by analyzing and summarizing information from multiple online sources, complete with citations. “Deep research is built for people who do intensive knowledge work in areas like finance, science, policy, and engineering and need thorough, precise, and reliable research,” OpenAI wrote in the deep research release blog.

More users, more research — but with limits

Starting this week, ChatGPT Plus, Team, Enterprise, and Edu subscribers will receive 10 deep research queries per month. Pro users, who previously had a 100-query limit, now have access to 120 queries monthly.

To use the feature, users type a prompt and tap the deep research icon before submitting. Depending on the complexity of the query, ChatGPT can take anywhere from five to 30 minutes to generate a report.

Alongside the expanded access, OpenAI has rolled out several improvements to the tool, including:

  • Embedding images alongside citations for better readability.
  • Enhancing the tool’s ability to process uploaded documents.
  • Releasing a System Card outlining how deep research was developed, its capabilities, and the safety measures in place.

According to OpenAI, deep research was trained with input from hundreds of domain experts to ensure accuracy and reliability. The tool has been classified as “medium risk” in OpenAI’s Preparedness Framework.

Will free users get access?

For now, OpenAI says deep research is “very compute intensive,” meaning free-tier users won’t have access anytime soon; however, as the tool becomes more efficient, that could change.

OpenAI’s CEO Sam Altman expressed excitement over the rollout, calling deep research “one of my favorite things we have ever shipped.”

Rabbit’s New Android AI Agent Shows Promise But Remains a Work in Progress https://www.eweek.com/news/rabbit-ai-agent/ Fri, 21 Feb 2025 19:38:27 +0000 https://www.eweek.com/?p=232548 Rabbit is back with a follow-up to its R1 personal AI device. The company’s new AI agent controls apps and performs tasks on Android tablets.

The post Rabbit’s New Android AI Agent Shows Promise But Remains a Work in Progress appeared first on eWEEK.

Tech startup Rabbit is back in the spotlight, but not for the much-hyped R1 personal AI device that made headlines last year. This time, the company is showcasing a new “generalist Android agent” that can control apps and perform tasks on an Android tablet. In a recent blog post and video, engineers demonstrated how the agent handles a variety of tasks by processing text prompts entered on a laptop and executing the commands on the tablet.

In the demonstration, the AI successfully:

  • Adjusted system settings by changing app notifications
  • Found and played a YouTube video
  • Added cocktail ingredients from an app to a Google Keep grocery list
  • Generated an AI-powered poem and sent it via WhatsApp
  • Downloaded and played a game
  • Created a revenue plan in Google Docs and shared it with contacts

The Reality: Promising, But Slow and Unpolished

Despite its potential, Rabbit’s AI agent is far from perfect. The demo revealed that it can be slow, taking longer to complete tasks than a human would need. For example, it sent a poem over WhatsApp one line at a time, an odd quirk that suggests the AI still has a lot to learn.

The demo, which uses a laptop to type prompts that control an Android tablet, highlights the potential of Rabbit’s vision: a cross-platform AI system that can act on your behalf. However, it also raises questions about why the company’s $199 R1, a handheld AI gadget, wasn’t involved in the demonstration. The R1, which launched early last year, has struggled to live up to its initial promises, leaving many wondering whether the company is shifting its focus from hardware to software.

What’s Next for Rabbit?

Rabbit’s new Android agent builds on its earlier work with LAM Playground, a web-based AI tool launched last year. The company says this is just the beginning, with plans to roll out a “cross-platform multi-agent system” in the coming weeks. While the tech is cool, it’s hard to ignore that many of the tasks shown in the demo—like adding items to a grocery list or playing a YouTube video—are things most people can do just as quickly (if not faster) on their own.

The public reaction has been mixed. Some are excited about the potential of an AI assistant that can juggle multiple apps and tasks, while others are skeptical about its current limitations. For now, Rabbit’s Android AI agent shows potential, but it still needs refinement before it becomes a truly useful tool.

Muse: Microsoft’s Gen AI Model to Help Game Developers – Not Replace Them https://www.eweek.com/news/microsoft-muse-generative-ai-model-gameplay/ Thu, 20 Feb 2025 19:15:18 +0000 https://www.eweek.com/?p=232502

The post Muse: Microsoft’s Gen AI Model to Help Game Developers – Not Replace Them appeared first on eWEEK.

On Wednesday, Microsoft introduced Muse, a generative AI model designed to transform how games are conceptualized, developed, and preserved. Built on the World and Human Action Model (WHAM), Muse can generate game visuals, predict controller inputs, or even combine both to create dynamic gameplay sequences. This innovation, developed in collaboration with Xbox Game Studios’ Ninja Theory, aims to empower game developers and storytellers by offering new tools to enhance their creative processes.

The research behind Muse, published in the international journal Nature, was spearheaded by Microsoft Research’s Game Intelligence and Teachable AI Experiences (Tai X) teams. The model was trained on over one billion images and controller actions from Bleeding Edge, a multiplayer game developed by Ninja Theory. This dataset represents more than seven years of continuous gameplay, giving Muse a deep understanding of 3D game worlds, physics, and player interactions.

How Muse works

Muse’s capabilities are rooted in its ability to generate consistent, diverse, and persistent gameplay sequences. For instance, if given a prompt, Muse can create a two-minute gameplay clip that adheres to the game’s dynamics (consistency), introduces variations (diversity), and maintains key elements throughout (persistency). Early versions of the model struggled with accuracy, but iterative training on advanced GPU clusters, including NVIDIA’s H100s, significantly improved its performance.

One of Muse’s standout features is its potential to revive classic games. By analyzing gameplay data and visuals, Muse could optimize older titles for modern devices, making them accessible to new generations of players. “Countless classic games tied to aging hardware are no longer playable,” said Fatima Kardar, Microsoft’s corporate vice president of gaming AI. “Muse could change how we preserve and experience these games in the future.”

AI in gaming: A tool for developers, not a replacement

Despite Muse’s capabilities, Microsoft insists AI is not meant to replace human game developers. Dom Matthews, head of Ninja Theory, emphasized that Muse is a tool to enhance creativity, not take it over. Still, AI’s role in gaming remains a controversial topic. As game studios continue to embrace AI-driven tools, concerns about automation and job displacement persist. Microsoft has reassured developers that game creators will be at the center of its AI initiatives.

Future applications of Muse

Microsoft is already exploring Muse’s potential in real-time playable AI models and prototyping new gameplay experiences. The company also envisions applications beyond gaming, such as in interior design and architectural modeling, thanks to Muse’s ability to visualize and navigate 3D spaces.

“Beyond gaming, I’m excited by the potential of this capability to enable AI assistants that understand and help visualize things, from reconfiguring the kitchen in your home to redesigning a retail space to building a digital twin of a factory floor to test and explore different scenarios,” Peter Lee, president of Microsoft Research, said in a blog post.

OpenAI Co-founder Ilya Sutskever’s New AI Startup Hits $30 Billion Valuation – Without a Product https://www.eweek.com/news/ilya-sutskever-ai-startup-ssi-30b-valuation/ Wed, 19 Feb 2025 15:25:54 +0000 https://www.eweek.com/?p=232452 Safe Superintelligence (SSI), founded by Ilya Sutskever, has secured billion-dollar funding with a $30B valuation, focusing solely on AI.

The post OpenAI Co-founder Ilya Sutskever’s New AI Startup Hits $30 Billion Valuation – Without a Product appeared first on eWEEK.

Ilya Sutskever, the co-founder and former chief scientist of OpenAI, is raising over $1 billion for his AI startup, Safe Superintelligence (SSI), at a more than $30 billion valuation.

Sutskever, who left OpenAI in May 2024 after nearly a decade as its chief scientist, co-founded SSI in June with Daniel Gross, a former AI lead at Apple, and Daniel Levy, an ex-OpenAI researcher. SSI has no product or revenue yet, but investors are betting big on Sutskever’s vision of building a powerful and safe AI system.

According to Bloomberg, the funding round is led by Greenoaks Capital Partners, a San Francisco-based venture capital firm, which has committed $500 million to SSI. Other investors remain undisclosed, but previous backers include prominent names like Sequoia Capital and Andreessen Horowitz. The latest valuation represents a six-fold increase from SSI’s previous worth of $5 billion just months ago.

No products, no revenue — but a clear mission

What makes SSI stand out is its singular focus on safety. Unlike other AI companies racing to release products, SSI has no plans to sell anything soon. Instead, Sutskever has stated that the company’s first and only product will be “safe superintelligence.”

“This company is special because its first product will be the safe superintelligence, and it will not do anything else until then,” Sutskever told Bloomberg in June. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and being stuck in a competitive rat race.”

This approach has resonated with investors, who seem to trust Sutskever’s vision and track record. As one of the pioneers of neural networks, Sutskever has long been a leading voice in AI research. His work at OpenAI, including co-chairing the “superalignment” team, focused on ensuring AI systems remain aligned with human values.

From OpenAI conflict to a new beginning

However, Sutskever’s journey hasn’t been without controversy. In late 2023, he played a central role in the brief ouster of OpenAI CEO Sam Altman, a move that sparked internal turmoil. Altman was reinstated days later, and Sutskever eventually left the company.

Despite the drama, Sutskever’s new venture is gaining momentum. SSI’s website offers a glimpse into its philosophy: “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs.”

The bigger picture: a growing wave of AI startups

While the startup remains shrouded in mystery, its ambitious goals and Sutskever’s reputation have already made it a major player in the AI race. Interestingly, Sutskever isn’t the only OpenAI alum making waves in the AI space. Mira Murati, OpenAI’s former Chief Technology Officer, recently launched her startup, Thinking Machines Lab. The startup has already attracted a team of approximately 30 top researchers and engineers from other leading tech companies, including OpenAI, Meta, and Mistral.

Adobe Launches Firefly Video Model: A Direct Challenge to OpenAI’s Sora https://www.eweek.com/news/adobe-firefly-video-model-launch/ Thu, 13 Feb 2025 18:59:45 +0000 https://www.eweek.com/?p=232287 Adobe debuts Firefly Video Model in public beta, challenging OpenAI’s Sora with AI-powered video tools for pro filmmakers, emphasizing IP safety and seamless editing.

The post Adobe Launches Firefly Video Model: A Direct Challenge to OpenAI’s Sora appeared first on eWEEK.

Adobe has officially entered the AI video generation race with the launch of its Firefly Video Model in a public beta. This move marks a significant step in Adobe’s expansion of AI-powered creative tools specifically designed for professional filmmakers and video editors.

Unlike OpenAI’s Sora, which generates entirely AI-driven video clips, Adobe has tailored the Firefly Video Model to working filmmakers and editors, particularly those using its Premiere Pro software. Rather than producing standalone footage from scratch, Firefly is designed to enhance or fix existing scenes; it can generate clips to fill gaps in a sequence or add atmospheric effects like snow or fog, making it a practical tool for film and TV studios.

While OpenAI’s Sora can generate videos up to 20 seconds long, Adobe’s Firefly currently produces five-second clips at 1080p resolution.

IP safety and professional use

Adobe’s strong stance on intellectual property is a major differentiator. The Firefly Video Model is trained on licensed and public domain content, ensuring users can confidently use the generated videos in commercial projects without risking copyright infringement.

“We’re the most useful solution because we’re IP-friendly and commercially safe,” said Alexandru Costin, Adobe’s vice president of generative AI. “You can use our model without worrying about legal risks.”

To reinforce content authenticity, Adobe embeds Content Credentials into all AI-generated videos. This digital certification aligns with the company’s leadership in the Content Authenticity Initiative, which promotes transparency and verification for digital media.

The tool also seamlessly integrates with Adobe’s Creative Cloud suite, including Premiere Pro and Photoshop. It allows users to generate AI clips and then fine-tune them using Adobe’s professional editing tools, such as color matching and atmospheric effects.

Adobe Firefly Video Model pricing and accessibility

Adobe has introduced a tiered pricing model for Firefly Video Model to cater to different user needs:

  • Standard Plan: $9.99/month – 20 video clips
  • Pro Plan: $29.99/month – 70 video clips

In contrast, OpenAI offers 50 videos for $20 per month but at a lower resolution. A premium pricing plan for studios and high-volume users is also under development, with details expected later this year.

Adobe is also rolling out additional AI tools, including Scene to Image, which lets users create 3D references for AI-generated images, and an Audio and Video Translation tool for dubbing content into over 20 languages. These features and the Firefly Video Model are part of Adobe’s broader strategy to dominate the professional AI creative space.

Rising competition in AI Video

As a new entrant, Adobe faces stiff competition from AI video models like OpenAI’s Sora and the upcoming second generation of Google’s Veo model, both of which have gained attention for their advanced features. Startups like Runway and Pika Labs are also pushing the boundaries of what AI can do in video creation.

Adobe’s strategy focuses on quality over clip length, emphasizing professional-grade results that smoothly integrate into existing workflows. “We think great motion, structure, and definition are more important than longer clips that might be unusable,” Costin explained.

With Firefly Video Model, Adobe signals that AI-generated video is not just for experimentation; it’s a professional-grade tool ready for real-world production.

Paris AI Action Summit: Which Tech and Global Leaders Will Attend? https://www.eweek.com/news/paris-ai-action-summit-tech-global-leaders/ Thu, 06 Feb 2025 16:45:24 +0000 https://www.eweek.com/?p=232156 OpenAI, Google, and Microsoft are just three of the tech giants linked to this major AI event.

The post Paris AI Action Summit: Which Tech and Global Leaders Will Attend? appeared first on eWEEK.

Paris is set to become the epicenter of the global artificial intelligence conversation as world leaders, tech giants, and scientists gather for the AI Action Summit. Running from February 6 to 11, the event aims to balance AI innovation, ethical development, and international cooperation.

With nearly 100 countries participating, including the U.S., China, and India, the Artificial Intelligence Action Summit seeks to lay the groundwork for global AI governance. French President Emmanuel Macron, co-hosting alongside India’s Prime Minister Narendra Modi, is keen on positioning France as a leading hub for AI innovation while promoting accessibility and sustainability globally; the country has attracted major AI labs from companies including Google, Meta, and OpenAI to Paris.

Who’s attending the AI Action Summit?

U.S. Vice President JD Vance will represent the American delegation, marking his first international trip since taking office. China’s Vice Premier Ding Xuexiang will also be present. Top CEOs from tech companies including Google, Microsoft, OpenAI, and French AI startup Mistral are slated to attend. Sam Altman, chief executive officer of OpenAI (the creator of ChatGPT), is expected to speak.

From the scientific community, Meta’s AI chief Yann LeCun and Google DeepMind’s Demis Hassabis will join Nobel laureates and other experts to discuss AI’s impact on work, health, and sustainability.

Elon Musk’s presence remains unconfirmed, as does that of Liang Wenfeng, founder of Chinese AI firm DeepSeek, which made waves last week with its cost-effective, high-performance AI model DeepSeek-R1.

What’s on the agenda of the AI Action Summit?

Unlike previous AI summits in the U.K. and South Korea, which focused heavily on safety, the Paris AI Summit is structured around five core themes:

  • Public interest in AI.
  • The future of work.
  • Innovation and culture.
  • Trust in AI.
  • Global AI governance.

Each theme addresses a critical aspect of AI’s impact on society and its future development, reflecting the summit’s broader goal of fostering collaboration, inclusivity, and ethical innovation in AI.

What are expected outcomes from the AI Action Summit?

One of the anticipated outcomes of the summit is a non-binding communiqué outlining principles for the responsible development and use of AI. Additionally, there is an emphasis on distributing AI benefits to developing nations and securing funding for public-interest AI projects. France plans to leverage its clean energy resources to align AI advancement with climate goals.

The summit is also expected to result in significant investments, as philanthropies and businesses are anticipated to commit an initial $500 million in capital and up to $2.5 billion over the next five years to fund AI development projects.

Vatican Statement on AI Warns of “Instruments of War” https://www.eweek.com/news/vatican-statement-on-ai/ Wed, 29 Jan 2025 20:17:29 +0000 https://www.eweek.com/?p=232087

The post Vatican Statement on AI Warns of “Instruments of War” appeared first on eWEEK.

The Vatican has stepped into the global conversation on artificial intelligence with a sweeping new document that addresses the ethical challenges and opportunities posed by the fast-evolving technology. Released on Tuesday, the document, titled “Antiqua et Nova,” offers guidelines for the use of AI in areas ranging from warfare and healthcare to education and the environment. At its core, the Vatican stresses that AI should complement, not replace, human intelligence and dignity.

Pope Francis, who has repeatedly warned about the risks of unchecked AI development, has made this document a cornerstone of his call for ethical responsibility in technology. The release comes at a pivotal moment, as advancements like the Chinese AI chatbot DeepSeek challenge the dominance of U.S. tech giants, raising questions about the global race for AI supremacy.

AI and Warfare: A Call for Caution

One of the most striking sections of the document addresses the use of AI in warfare. The Vatican warns that autonomous weapons systems, which can identify and strike targets without human intervention, pose a grave threat to humanity. “No machine should ever choose to take the life of a human being,” the document said, stressing that removing human moral judgment from warfare could lead to a destabilizing arms race with catastrophic consequences.

“Lethal Autonomous Weapon Systems, which are capable of identifying and striking targets without direct human intervention, are a cause for grave ethical concern because they lack the unique human capacity for moral judgment and ethical decision-making,” it said.

AI and Human Relationships: No Substitute for Empathy

The Vatican also cautions against the over-reliance on AI in personal relationships, particularly in areas like child development and interpersonal connections. While AI can simulate empathy, it cannot replicate the depth of authentic human relationships. “AI can only simulate relationships,” the document noted, “but human beings are meant to experience them genuinely.”

Misinformation, Healthcare, and Education

The Vatican also addresses the dangers of AI-generated misinformation and deepfakes, urging individuals to verify the truth of what they share online. “Countering AI-driven falsehoods requires the efforts of all people of goodwill,” the document said. It also recognizes AI’s potential to improve medical diagnostics but stresses that the technology must not replace the human connection between patients and healthcare providers.

“Decisions regarding patient treatment and the weight of responsibility they entail must always remain with the human person and should never be delegated to AI,” it asserts.

In education, the Vatican urges that AI be used to foster critical thinking rather than simply training students to amass information. “Education is not about filling one’s head with ideas but about engaging the mind, heart, and hands,” it said, calling on schools and universities to address the ethical dimensions of technology. “An essential part of education is forming the intellect to reason well in all matters, to reach out towards truth, and to grasp it, while helping the ‘language of the head’ to grow harmoniously with the ‘language of the heart’ and the ‘language of the hands.’”

AI Accountability

The Vatican’s document underscores the need for accountability in AI development, warning against the concentration of power in the hands of a few tech giants. “AI should not be seen as an artificial form of human intelligence but as a product of it,” it said, calling for a moral evaluation of how and when AI technologies are used.

Bishop Paul Tighe, secretary of the Vatican’s Dicastery for Culture and Education, described the document as a balanced approach that neither embraces apocalyptic fears nor uncritically celebrates AI. “It’s trying to see the potential and celebrate the extraordinary achievement that AI is,” he said, emphasizing humanity’s God-given capacity to innovate responsibly.
