Sunny Yadav, Author at eWEEK
https://www.eweek.com/author/syadav/

OpenAI’s $50M Bet on the Future: An AI Revolution
https://www.eweek.com/news/openai-nextgenai/ | Wed, 05 Mar 2025
NextGenAI by OpenAI is a $50M, 15-institution effort set to transform AI innovation, from groundbreaking research to next-level educational initiatives.

OpenAI has unveiled its bold new initiative, NextGenAI — a $50 million project designed to revolutionize the world of artificial intelligence. In partnership with 15 leading research institutions, NextGenAI aims to drive scientific breakthroughs and enhance educational opportunities by using advanced AI tools and resources.

Accelerating AI research innovations

NextGenAI is set to transform research across diverse fields. Institutions like Caltech, Duke University, Harvard, MIT, and Oxford are among the pioneering partners. These esteemed organizations are using AI to tackle complex challenges in digital health, advanced therapeutics, energy, mobility, agriculture, and beyond.

By integrating cutting-edge AI models into research pipelines, the initiative promises to accelerate discoveries and provide vital insights into pressing scientific problems. OpenAI’s investment in NextGenAI reflects its commitment to fostering collaborations that bridge academia and industry, ensuring that innovations benefit society at large.

Empowering education and AI literacy

Beyond its research ambitions, NextGenAI is dedicated to advancing education. Leading academic institutions are incorporating AI tools into their curricula, offering students hands-on experiences that cultivate critical skills in AI literacy.

Programs such as the Generative AI Literacy Initiative at Texas A&M, along with enhanced training opportunities at MIT and Howard University, are equipping future leaders with the knowledge needed to responsibly harness AI’s potential. This focus on education ensures that the next generation is well-prepared to navigate and shape a future where artificial intelligence plays a central role in every industry.

Enhancing collaboration and accessibility

NextGenAI is also redefining how academic communities interact with historical resources. Projects at institutions like the Boston Public Library and Oxford’s Bodleian Library are utilizing AI to digitize and transcribe rare manuscripts, making centuries-old texts accessible to scholars worldwide. This innovative application of AI not only preserves cultural heritage but also democratizes access to information, fostering a more inclusive and collaborative research environment that bridges past and future.

With robust funding, strategic partnerships, and a dual focus on research and education, NextGenAI is poised to drive transformative change across multiple sectors. As collaborations deepen and AI continues to evolve, this initiative could serve as a blueprint for future endeavors in technology and academia. Looking ahead, the initiative is expected to inspire further investments in both academic research and practical applications, creating a ripple effect that will redefine global standards for innovation, collaboration, and technological advancement.

Explore our list of top AI companies dominating the AI landscape and stay up to date with the latest advancements in the field.

Anthropic’s Good Fortune: $61.5B Valuation, Amazon Endorsement & Claude Innovations
https://www.eweek.com/news/anthropic-valuation-series-e-funding/ | Tue, 04 Mar 2025
Anthropic secures $3.5B in Series E funding at a $61.5B valuation, launching breakthrough AI models like Claude 3.7 Sonnet and Claude Code to drive global expansion.

In a landmark move for the AI industry, Anthropic has secured $3.5 billion in its Series E funding round, elevating its post-money valuation to $61.5 billion. The announcement underscores the company’s rising influence in the competitive field of artificial intelligence.

With backing from renowned investors including Lightspeed Venture Partners, Bessemer Venture Partners, and a notable endorsement from Amazon, Anthropic is set to accelerate the development of its next-generation AI systems. The latest round comes on the heels of recent product launches that highlight the company’s dedication to advancing AI capabilities while upholding rigorous safety and alignment standards.

Expanding AI horizons with Claude innovations

This new capital injection coincides with the rollout of Anthropic’s groundbreaking updates to its flagship AI models. The introduction of Claude 3.7 Sonnet and Claude Code has pushed the boundaries of what AI can achieve, particularly in coding and collaborative problem-solving. Early reports indicate that these innovations are already transforming workflows for diverse clients ranging from nimble startups to global corporations such as Zoom, Snowflake, and Pfizer.

Integration efforts have seen platforms like Replit harnessing Claude’s natural language processing to convert text into executable code, thereby streamlining software development and improving operational efficiency. With the funds raised, Anthropic plans to deepen research into mechanistic interpretability and AI alignment, ensuring that its solutions not only perform exceptionally but also operate with a high degree of reliability and ethical oversight.

Global ambitions and strategic expansion

The Series E round is expected to power Anthropic’s ambitious plans for global expansion. The additional resources will enable the company to scale its computing capacity and extend its market reach into new international territories.

As industry giants and emerging enterprises alike seek robust AI partnerships, Anthropic’s strategic focus on developing collaborative and secure AI models positions it favorably in the digital landscape. Investors remain optimistic that these advancements will catalyze further breakthroughs in AI technology, fostering innovation that can address complex challenges across sectors.

As Anthropic continues to push the frontiers of AI research and product development, its commitment to ethical and effective innovation is poised to redefine industry standards and drive transformative change worldwide.

Explore our list of top AI companies dominating the AI landscape and stay up to date with the latest advancements in the field.

Apple’s AI Push: Siri Overhaul and Rumored Hardware Enhancements Signal New Era in PCs
https://www.eweek.com/news/apple-ai-siri-macbook-air-ipad-air/ | Mon, 03 Mar 2025
Apple is reinventing Siri with bold AI upgrades, as rumors swirl around the new M4 MacBook Air and iPad Air 2025.

Apple is doubling down on artificial intelligence, with a renewed focus on Siri set to challenge Amazon’s Alexa. A Bloomberg report highlights Apple’s intensified investment in digital assistants, fueling speculation about AI-driven upgrades in upcoming hardware. Insiders suggest that Apple’s upcoming M4 MacBook Air and iPad Air 2025 models will introduce major AI enhancements, reinforcing Apple’s strategic pivot toward a more intelligent and integrated user experience.

Siri’s AI makeover: A competitive leap?

Industry analysts point out that Apple is investing heavily in refining Siri’s contextual understanding, responsiveness, and integration across its ecosystem. This comes at a time when voice-controlled AI services are not only a convenience but a competitive battleground for tech giants.

These improvements are more than cosmetic tweaks — they represent a strategic effort to elevate Siri into a tool that can rival and perhaps even surpass Alexa in several key areas. Apple’s commitment to privacy and seamless integration with its proprietary hardware positions Siri uniquely. By using in-house chip technology and proprietary software advancements, Apple aims to provide users with a more intuitive and secure interaction model. This AI-centric upgrade is expected to enhance productivity and expand digital connectivity across devices.

Anticipation builds for the M4 MacBook Air and iPad Air 2025

Alongside the Siri overhaul, rumors surrounding the M4 MacBook Air and the forthcoming iPad Air 2025 have intensified market speculation. These devices are rumored to harness advanced generative AI features that improve performance and personalize user interactions. Enhanced machine learning algorithms could tailor system responses based on individual usage patterns, suggesting a future where hardware and software are even more intricately linked.

Redefining users’ expectations

As the tech community awaits official announcements, the convergence of refined AI capabilities and cutting-edge hardware could redefine user expectations. Apple’s reinvention of Siri, together with the launch of the anticipated devices, is set to usher in a new era in personal computing — one where artificial intelligence serves as both a personal assistant and a gateway to a seamlessly connected digital ecosystem. With strategic investments in AI development and hardware innovation, Apple appears poised to lead the next wave of AI-powered technology.

Explore our list of top AI companies dominating the AI landscape and stay up to date with the latest advancements in the field.

Microsoft Cracks Down on Global Cybercrime Network Exploiting Generative AI
https://www.eweek.com/news/microsoft-azure-openai-service-cybercrime-generative-ai/ | Fri, 28 Feb 2025
Microsoft targets a global cybercrime network abusing generative AI, seizing key infrastructure and disrupting hackers misusing Azure OpenAI Service.

Microsoft has launched a sweeping legal initiative to dismantle a global hacking network that exploited generative AI, the company announced in an official blog post. The hackers bypassed AI safety measures to infiltrate its Azure OpenAI Service, raising alarms over the growing misuse of advanced technologies.

Unmasking the cybercriminals behind generative AI abuse

According to Microsoft’s official blog, the company’s Digital Crimes Unit has identified the culprits behind what it describes as “Storm-2139,” a cybercrime network orchestrating the abuse of generative AI. The network, which spans multiple countries, includes individuals operating under aliases such as “Fiz,” “Drago,” “cg-dot,” and “Asakuri.”

Based on court filings, these actors exploited publicly available customer credentials to illegally access Microsoft’s AI services, manipulate their capabilities, and resell modified access to other bad actors. This nefarious scheme enabled the generation of harmful content, including non-consensual and sexually explicit imagery, in clear violation of Microsoft’s policies.

Bloomberg reported that Microsoft has publicly exposed the identities and methodologies of these hackers, revealing the extent of their operations and the vulnerabilities they exploited. The revelations not only underscore the severity of the threat but also serve as a stern warning to other malicious actors who might be tempted to undermine the guardrails designed to keep AI use safe and ethical.

Legal measures and industry implications

Microsoft has named the primary developers behind the criminal tools in an amended complaint filed in the U.S. District Court for the Eastern District of Virginia. The company’s initiative has already yielded results, with a temporary restraining order and preliminary injunction leading to the seizure of a critical website used by the network.

This measure effectively disrupted the operations of Storm-2139 and demonstrated Microsoft’s commitment to protecting its technology and its users from exploitation. Microsoft is now preparing referrals to U.S. and international law enforcement agencies to further pursue legal action against these actors.

Industry experts warn that the ramifications of this crackdown extend far beyond the immediate disruption of cybercriminal activities. As generative AI models become increasingly embedded in everyday applications, ensuring their responsible use is critical. Microsoft’s legal action serves as a precedent for the tech industry, emphasizing that stronger regulatory and technical safeguards are necessary to prevent emerging technologies from being misused.

As generative AI rapidly enters the mainstream, ethical issues have come to the forefront. Explore our guide on generative AI ethics to ensure you’re on the right side of the issue.

GibberLink: Breakthrough in How Voice Assistants Communicate AI-to-AI
https://www.eweek.com/news/gibberlink-new-ai-language/ | Wed, 26 Feb 2025
GibberLink, which debuted at the ElevenLabs Hackathon, is a protocol for AI voice assistants to swap data via modulated sound, boosting efficiency and cutting costs.

In a groundbreaking demonstration at the ElevenLabs London Hackathon, developers unveiled GibberLink, a novel protocol that enables AI voice assistants to communicate in a language optimized for machines rather than humans. By switching from traditional human-like speech to hyper-efficient, sound-based data transmission, GibberLink promises to reduce unnecessary computational load and pave the way for faster, more error-resistant interactions.

The birth of GibberLink

Developed by Boris Starkov and Anton Pidkuiko during the hackathon, GibberLink emerged from a simple yet revolutionary idea: If AI agents are handling routine tasks like booking a hotel or managing customer service, why waste resources replicating the inefficiencies of human conversation?

Using ElevenLabs’ cutting-edge conversational AI technology combined with the open-source ggwave library, the developers created a system where AI agents recognize when they’re speaking to another machine. Once this recognition kicks in, the agents instantly switch from natural language to a protocol that uses modulated sound waves — a method reminiscent of dial-up modem signals from the 1980s.

In the demo, one AI agent, acting as a customer seeking a hotel room for a wedding, began a conversation in human speech. Midway through the exchange, upon detecting that its conversation partner was also an AI, both agents transitioned to GibberLink mode. The communication then shifted to a series of structured audio tones that efficiently conveyed data without the overhead of generating natural language.

This change reduced the computational resources typically required for speech synthesis and promised a more error-resistant communication channel.
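
To make the idea concrete, here is a minimal, hypothetical Python sketch of how an agent might switch from spoken replies to ggwave audio tones once it knows its counterpart is a machine. The handshake flow, the stub audio functions, and the exact ggwave call signatures (init, encode, free) are illustrative assumptions based on the open-source binding, not details taken from the GibberLink demo itself.

```python
# Illustrative sketch (not the GibberLink implementation) of an agent that
# answers in speech for humans but falls back to ggwave audio tones for
# machine peers. The ggwave calls follow the open-source Python binding as
# commonly documented; treat their exact signatures as assumptions.
import ggwave


def speak(text: str) -> None:
    # Stub for a normal text-to-speech reply played over the call.
    print(f"[TTS] {text}")


def play_waveform(waveform: bytes) -> None:
    # Stub for writing raw modulated-audio samples to the call's audio channel.
    print(f"[DATA] sending {len(waveform)} bytes of modulated audio")


def reply(message: str, peer_is_machine: bool) -> None:
    if peer_is_machine:
        # Machine peer confirmed: skip speech synthesis and encode the payload
        # directly as structured sound, dial-up-modem style.
        play_waveform(ggwave.encode(message, protocolId=1, volume=20))
    else:
        speak(message)


if __name__ == "__main__":
    decoder = ggwave.init()  # decoder state for incoming tones (assumed API)
    reply("Do you have a room available for a wedding party?", peer_is_machine=False)
    # ...once both sides announce they are AI agents, switch modes...
    reply('{"intent": "book_room", "guests": 4, "date": "2025-06-12"}',
          peer_is_machine=True)
    ggwave.free(decoder)     # assumed cleanup helper in the binding
```

In practice, the interesting part is the detection step: once each agent recognizes the other as a machine, every subsequent turn can skip speech synthesis entirely and move structured payloads over the same audio channel.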

GibberLink’s efficiency and future implications of this AI innovation

The advantages of GibberLink extend far beyond novelty. By eliminating the need for human-like speech in AI-to-AI interactions, the protocol can significantly reduce compute costs, energy consumption, and latency. In an era where virtual assistants manage both inbound and outbound communications, this breakthrough could lead to substantial operational efficiencies.

Influential tech figures, major publications, and prominent social media influencers like Marques Brownlee have spotlighted the innovation, emphasizing its potential to reshape the dynamics of digital communication.

GibberLink’s open-source release on GitHub invites developers worldwide to explore, refine, and possibly integrate this technology into a range of applications — from smarter customer service bots to fully autonomous systems coordinating in real time. As artificial intelligence becomes an even more integral part of everyday technology, the ability for machines to “speak” in their own optimized language may soon become standard practice.

Explore our list of top AI companies dominating the AI landscape and stay up to date with the latest advancements in the field.

Anthropic’s Claude 3.7 Sonnet and Claude Code Set New AI Standard
https://www.eweek.com/news/anthropic-claude-3-7-sonnet-claude-code/ | Tue, 25 Feb 2025
Anthropic’s customizable AI models Claude 3.7 Sonnet and Claude Code empower users to dictate reasoning depth for creative and technical tasks.

Anthropic has taken a bold step forward in the AI arena with the launch of its latest models: Claude 3.7 Sonnet and Claude Code. These innovative tools are designed to empower users by allowing them to control the depth of AI reasoning — a feature that sets a new industry standard. The introduction of these models comes as the company seeks to differentiate itself from competitors like OpenAI and DeepSeek, heralding a new era in customizable artificial intelligence.

Enhanced user customization

A standout feature of Claude 3.7 Sonnet is its unprecedented user-directed reasoning control. Users can now decide how much the AI should “think through” its responses. This tailored approach optimizes performance for a range of tasks — from creative writing to complex problem-solving — and addresses longstanding concerns over opaque AI decision-making processes. By letting users modulate reasoning depth, Anthropic enhances both the transparency and efficiency of its AI models.
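
For developers, the practical upshot is an API-level dial for reasoning effort. The hedged sketch below shows how a per-request "thinking budget" might be set with the Anthropic Python SDK; the thinking/budget_tokens parameter shape and the model identifier are assumptions drawn from public SDK documentation, not from this announcement.

```python
# Hypothetical sketch of requesting more or less reasoning per call via the
# Anthropic Python SDK. The thinking/budget_tokens shape and the model name
# are assumptions, not confirmed by the article.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def ask(prompt: str, thinking_budget: int = 0):
    extra = {}
    if thinking_budget:
        # Let the model spend up to `thinking_budget` tokens reasoning before
        # it writes the visible answer.
        extra["thinking"] = {"type": "enabled", "budget_tokens": thinking_budget}
    return client.messages.create(
        model="claude-3-7-sonnet-20250219",   # assumed model identifier
        max_tokens=thinking_budget + 4096,    # leave room beyond the budget
        messages=[{"role": "user", "content": prompt}],
        **extra,
    )


quick = ask("Summarize this release note in one sentence.")
deep = ask("Find the race condition in this queue implementation: ...",
           thinking_budget=8000)
```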

The launch of Claude Code extends this customization into coding and technical tasks. With capabilities designed to support developers and technical professionals, Claude Code refines code generation and debugging functionalities. This dual-model strategy diversifies Anthropic’s offerings and reinforces its commitment to meeting the varied needs of a rapidly evolving market.

Anthropic’s strategic move in AI

The timing of this launch is notable. With fierce competition in the AI sector, Anthropic’s decision to integrate customizable reasoning options positions it at the forefront of the next big battle in artificial intelligence. Industry analysts have pointed out that letting users determine how much the model reasons could be a game-changer, potentially redefining user expectations and setting a new benchmark for future AI developments.

As the market watches closely, the introduction of Claude 3.7 Sonnet and Claude Code could signal a broader trend toward user empowerment in AI design. By focusing on flexibility and control, Anthropic is responding to current technological challenges, paving the way for more adaptable and responsible AI systems.

Pricing remains unchanged

Despite these significant advancements, Anthropic has maintained a consistent pricing structure. Users continue to pay $3 per million input tokens and $15 per million output tokens, with thinking tokens included in the output cost.
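
At those rates, per-request costs stay modest even with heavy reasoning enabled. A quick back-of-the-envelope calculation, using invented token counts purely for illustration:

```python
# Worked cost example at the quoted rates: $3 per million input tokens and
# $15 per million output tokens, with thinking tokens billed as output.
# The token counts below are invented for illustration.
INPUT_RATE = 3 / 1_000_000     # dollars per input token
OUTPUT_RATE = 15 / 1_000_000   # dollars per output token

input_tokens = 12_000            # prompt plus context
thinking_tokens = 6_000          # hidden reasoning, billed as output
visible_output_tokens = 2_000    # answer returned to the user

cost = (input_tokens * INPUT_RATE
        + (thinking_tokens + visible_output_tokens) * OUTPUT_RATE)
print(f"${cost:.3f}")  # 12,000 in + 8,000 out -> $0.036 + $0.120 = $0.156
```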

Explore our list of top generative AI companies dominating the AI landscape and developing new applications for the technology.

Salesforce is Not For Sale: CEO Marc Benioff Debunks Billion-Dollar Acquisition Rumors
https://www.eweek.com/news/benioff-denies-billion-dollar-deal/ | Fri, 21 Feb 2025
Salesforce CEO Marc Benioff denies rumors of a $1B acquisition, emphasizing the company’s commitment to building AI-driven solutions through internal innovation.

Salesforce CEO Marc Benioff is once again at the center of industry chatter as the company positions itself for an AI-driven future. In a series of high-profile statements, including a recent social media update and a subsequent denial reported by Yahoo Finance, Benioff has addressed swirling rumors linking the tech giant to a potential billion-dollar acquisition. The remarks come as part of a broader strategy to harness artificial intelligence internally, steering clear of expensive external buyouts while focusing on sustainable innovation.

A bold vision for AI innovation

Salesforce is doubling down on its commitment to AI. Rather than diverting resources to large-scale acquisitions, the company is investing heavily in refining its existing platforms to integrate advanced AI capabilities. According to Benioff’s recent tweet, the focus remains on building robust, in-house solutions that can offer personalized customer experiences and streamlined business operations.

This approach underscores a strategic pivot: By channeling funds into research and development, Salesforce aims to lead the market in AI innovation without the complications and risks that come with multi-billion-dollar deals.

Industry analysts are applauding this strategy as it promises agility and long-term stability. By fostering an environment of continuous improvement, Salesforce is enhancing its software suite and positioning itself as a trailblazer in the competitive race toward AI supremacy. The move is emblematic of a broader trend where major tech players are prioritizing organic growth and technological prowess over costly mergers and acquisitions.

Debunking the billion-dollar rumor

Speculation had been rife following reports of a potential $1 billion acquisition aimed at boosting the company’s AI capabilities. However, Benioff categorically denied any plans for such an acquisition, emphasizing that Salesforce’s growth strategy is firmly anchored in innovation from within.

“Our vision is to innovate and evolve our own products rather than engaging in expensive external deals,” he asserted, clarifying that the focus is on organic development. This clear message comes at a critical juncture as investors and market observers seek to understand how tech companies are navigating the AI revolution. By staying the course and investing internally, Salesforce is reinforcing its commitment to delivering cutting-edge solutions that cater to a rapidly evolving digital marketplace.

Artificial intelligence is reshaping industries, and Salesforce’s strategy of organic growth and internal innovation is a testament to its long-term vision. As the company continues to expand its AI capabilities, all eyes remain on Benioff’s leadership and the evolving narrative of sustainable, technology-driven progress.

Explore our list of top generative AI companies dominating the AI landscape and developing new applications for the technology.

EU Faces Backlash Over AI Act Copyright Loophole
https://www.eweek.com/news/eu-backlash-over-ai-act/ | Thu, 20 Feb 2025
EU lawmakers face mounting criticism as a copyright loophole in the AI Act threatens creative works, sparking debates over tech innovation and legal protections.

The European Union finds itself at the center of a fierce debate over artificial intelligence regulation. The newly proposed AI Act, hailed by some as a pioneering framework to shape the future of technology, is now under fire amid accusations that it contains a major copyright loophole.

Critics warn that the ambiguous provisions could allow companies to use copyrighted material in training generative AI models without proper compensation, potentially devaluing creative works and undermining intellectual property rights. Those concerns have escalated quickly.

Critics raise alarm

Opponents of the AI Act have quickly highlighted concerns regarding the legislation’s text and data mining exceptions. Legal experts and copyright advocates argue that the loophole is far from a minor oversight — it represents a fundamental flaw that may be exploited by large tech firms.

“The Act’s provisions inadvertently open the door for companies to bypass essential copyright safeguards,” commented one industry lawyer, reflecting the sentiments echoed across various stakeholder meetings and public forums.

Content creators fear that this regulatory gap could lead to a surge in unauthorized use of their works, diluting the incentives for original creation. As the debate heats up, advocacy groups are calling on EU lawmakers to amend the legislation, insisting on clearer guidelines that would enforce fair remuneration and preserve the rights of content owners.

Industry experts warn of unintended consequences

Meanwhile, industry analysts caution that an overly permissive environment could have far-reaching consequences. The potential for tech companies to leverage vast amounts of protected content without compensation may accelerate innovation in AI, yet it simultaneously sets the stage for complex legal battles.

“We are at a crossroads where balancing technological progress and intellectual property protection is paramount,” noted a senior analyst in a recent panel discussion. This concern is not isolated within the EU; international observers are closely monitoring the developments, aware that the outcome could set a precedent for global AI data governance. The controversy underscores a broader tension: how to foster innovation without compromising the rights and rewards of creative contributors.

As discussions intensify, the EU is under mounting pressure to revisit and refine the AI Act. Lawmakers are expected to engage in a series of high-stakes debates aimed at reconciling the dual imperatives of technological advancement and copyright protection. With both industry leaders and content creators watching closely, the coming weeks could shape the future of AI regulation and determine whether innovation and intellectual property rights can truly coexist.

Learn about AI policy and governance in detail to understand how to use the technology responsibly and how it can impact your business.

South Korea’s Data Center Power Play: 3 Gigawatts for an AI Revolution
https://www.eweek.com/news/south-korea-ai-data-center/ | Wed, 19 Feb 2025
South Korea unveils plans for a 3-gigawatt AI data center that fuses cutting-edge tech with renewable energy, propelling digital innovation and growth.

South Korea is set to revolutionize its digital landscape with plans for an AI data center designed to deliver up to 3 gigawatts of power. The ambitious project is not just about creating a new facility — it signals a broader national strategy to secure leadership in artificial intelligence and data processing. With this initiative, South Korea aims to harness advanced computational capabilities, drive innovation, and establish itself as a powerhouse in the global tech arena.

Ambitious infrastructure for AI expansion

The envisioned data center represents one of the most significant infrastructure projects in artificial intelligence. Designed to support the computational demands of emerging AI applications, the facility is expected to offer a power capacity of up to 3 gigawatts — an amount that could power high-performance computing systems and large-scale data processing operations.

By investing in such a colossal infrastructure, the country is laying the groundwork for a future where AI-driven technologies enhance everything from autonomous vehicles and smart cities to advanced robotics and healthcare systems. The data center is anticipated to attract both domestic and international tech firms, fostering innovation and potentially creating thousands of high-skilled jobs.

The Wall Street Journal was the first to report on this initiative, underscoring its potential to reshape South Korea’s role in the global AI industry.

Powering the future with renewable energy

An equally critical aspect of this initiative is its focus on sustainable energy. With the facility’s enormous energy needs, there is growing attention on how to power it responsibly. Industry experts suggest that integrating renewable energy sources could be key to mitigating environmental impacts. South Korea’s push towards greener energy solutions may well see this data center incorporating solar, wind, or even next-generation battery storage technologies to ensure a steady, eco-friendly power supply.

This approach will help reduce carbon emissions and position the nation as a leader in merging high-tech advancements with sustainable practices. Moreover, the project could serve as a model for future developments, emphasizing that high computational power and environmental responsibility can go hand in hand.

The proposed AI data center in South Korea is more than just a facility — it is a symbol of the country’s dedication to leading the next wave of technological innovation. As South Korea continues to invest in its future, the world watches closely, recognizing that such bold initiatives could very well shape the global AI landscape for decades to come.

Explore our list of top AI companies dominating the AI landscape and stay up to date with the latest advancements in AI.

Ex-Google CEO Warns of AI Dangers & Calls for Global Oversight
https://www.eweek.com/news/ex-google-ceo-eric-schmidt-ai-warning/ | Thu, 13 Feb 2025
Eric Schmidt warns that rogue states and terrorists may exploit AI to pose an extreme risk to global security.

Former Google CEO Eric Schmidt has warned that AI could soon be harnessed by rogue states like North Korea, Iran, and Russia or even by terrorists to inflict harm on innocent people. Speaking on BBC Radio 4’s Today program from Paris, Schmidt outlined a grim scenario in which advanced AI technologies enable those with malevolent intent to develop weapons and potentially launch biological attacks.

“Think about North Korea, or Iran, or even Russia,” he cautioned. “This technology is fast enough for them to adopt that they could misuse it and do real harm.” He also referenced an “Osama bin Laden scenario,” warning of a truly evil actor using artificial intelligence to disrupt modern life.

Terrorist threats and rogue state risks

Schmidt’s remarks come amid growing concerns over AI’s double-edged promise. While AI continues to drive innovation, it also poses unprecedented security risks. The former Google executive warned that terrorist organizations could adopt or misuse AI, amplifying their capacity for harm.

He highlighted the potential for AI-powered tools to be used in orchestrating cyberattacks, deploying autonomous weapon systems, or even engineering biological threats. These warnings underscore the urgent need for a robust, coordinated response to prevent AI from becoming a tool for mass disruption.

Do we need more robust regulation and global oversight of AI?

In tandem with his cautionary message, Schmidt supported measures such as the U.S. export controls introduced by former President Joe Biden, which restrict the sale of advanced AI microchips to all but 18 allied countries in order to slow adversaries’ progress. While he stressed the necessity of government oversight of private tech companies developing AI models, he also warned that over-regulation could stifle innovation.

“It’s really important that governments understand what we’re doing and keep their eye on us,” he said, emphasizing that tech leaders might make different value judgments than governments.

Schmidt’s comments coincided with the recent Paris AI Summit, where world leaders, CEOs, and policy experts debated the future of AI governance. The summit, which saw mixed responses from countries like the U.K. and the U.S. regarding an international AI agreement, highlighted the fine balance between fostering innovation and ensuring national security. 

This warning from one of tech’s most influential figures marks a critical juncture in the ongoing debate over AI governance — a call for immediate, coordinated global action to address risks that could threaten both security and society at large.

Learn more about AI policies and governance to understand why they are essential for organizations to ensure the responsible use of AI technology.
