Chris Bernard, Author at eWEEK
https://www.eweek.com/author/cbernard/

Microsoft’s New Dragon AI Copilot Fights Healthcare Paperwork with Fire
https://www.eweek.com/news/microsoft-dragon-ai-copilot/ | Mon, 03 Mar 2025

Microsoft’s new Dragon Copilot helps doctors with paperwork, using Nuance AI to automate documentation and free up time.

Microsoft is launching its new Dragon Copilot advanced voice assistant to help healthcare professionals manage the burden of paperwork, long a source of frustration. Studies show that doctors spend nearly twice as much time on administrative work as they do seeing patients, contributing to burnout and inefficiencies in healthcare delivery. Microsoft’s solution aims to reverse that trend by automating documentation and enabling hands-free access to critical information.

Built on technology from Nuance—acquired by Microsoft in 2021—the new AI tool integrates natural language processing and automation to streamline clinical documentation, retrieve information, and handle routine tasks.

“At Microsoft, we have long believed that AI has the incredible potential to free clinicians from much of the administrative burden in healthcare and enable them to refocus on taking care of patients,” Corporate Vice President of Microsoft Health and Life Sciences Solutions and Platforms Joe Petro said in a statement.

Read more: Gen AI in healthcare

What Does AI-Powered Healthcare Look Like?

More than just a dictation tool, Dragon Copilot combines the capabilities of Nuance’s Dragon Medical One and DAX Copilot for a range of AI in healthcare use cases. According to Microsoft, the AI assistant can:

  • Generate clinical notes in real time
  • Process conversational orders
  • Draft referral letters and after-visit summaries

It also offers access to trusted medical resources, allowing clinicians to quickly retrieve relevant information without breaking their workflows.

The AI’s potential impact extends beyond convenience. Reducing the time spent on manual data entry allows physicians to dedicate more attention to patients, leading to improved quality of care and job satisfaction. Early adopters of similar AI-driven tools have seen reduced documentation time, leading to lower stress levels among healthcare workers.

Doctor/Patient… and Dragon Confidentiality?

Dragon Copilot also has the potential to change how doctors and patients interact. The AI can passively capture key details during consultations, creating a structured record while freeing the doctor up to engage in the conversation. This could lead to more natural discussions between clinicians and patients as well as shorter wait times and more personalized care.

However, questions around data security, reliability, and potential biases remain. Microsoft emphasizes that Dragon Copilot is built on a secure, HIPAA-compliant framework, and that it continuously learns from user interactions to refine its accuracy. Still, widespread adoption will depend on trust from both medical professionals and patients.

Dragon Copilot will be available in the U.S. and Canada starting in May, with plans to expand to Europe later in the year.

Microsoft’s Brad Smith Criticizes US AI Diffusion Rule: “Insufficient Supply” Will Drive Business to Chinese Vendors
https://www.eweek.com/news/microsoft-president-brad-smith-criticizes-ai-diffusion-rule/ | Thu, 27 Feb 2025

Microsoft’s Brad Smith warns that the Biden administration’s AI Diffusion Rule could hurt U.S. AI leadership and boost China’s AI sector.

In a blog post published today, Vice Chair and President of Microsoft Brad Smith expressed concerns over the Biden administration’s interim final AI Diffusion Rule, cautioning that it could inadvertently hinder U.S. leadership in artificial intelligence (AI) and benefit China’s AI sector. Smith argued that the rule, which limits the export of advanced AI components to certain countries, could weaken relationships with key allies and stifle economic growth in these regions.

What is the AI Diffusion Rule?

Introduced in January 2025, the AI Diffusion Rule aims to protect national security by restricting the export of advanced AI components to nations designated as “Tier Two” countries, including Switzerland, Poland, Greece, Singapore, India, and Israel, among others. These countries face quantitative limits on the import of American AI technology, raising concerns about supply shortages. Smith noted that customers in these countries now worry about “an insufficient supply of critical American AI technology,” potentially driving them to seek alternatives from Chinese AI suppliers.

The rule is part of a broader U.S. strategy to control the global distribution of AI technologies and prevent adversaries from acquiring advanced AI capabilities. Smith and other critics argue that it imposes centralized control over the global computing economy, restricting the reach of U.S. technology companies.

Implications for the tech sector and geopolitics

Smith highlighted the potential impact on U.S. economic growth and global tech leadership, pointing to Microsoft’s $80 billion investment in AI infrastructure worldwide — more than half of which is dedicated to the U.S. He argued that expanding AI infrastructure in other countries is essential to provide low-latency services to local enterprises and consumers, warning that the current rule “discourages what should be regarded as an American economic opportunity — the export of world-leading chips and technology services.”

Smith urged the Trump administration to revise the rule, suggesting that it remove the quantitative caps while preserving qualitative security standards, and emphasized the importance of enabling American firms to compete globally.

“America’s AI race with China begins at home,” he wrote.

Other perspectives

Biden’s Undersecretary of Commerce for Industry and Security, Alan Estevez, defended the rule in January, saying it prevents adversaries from acquiring advanced AI capabilities that could pose security threats. Just days later, after taking office, President Trump signed Executive Order 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence,” which revoked some prior policies and established a plan to promote AI development.

The order mandates the creation of an action plan within 180 days to sustain U.S. AI leadership, focusing on human flourishing, economic competitiveness, and national security. Additionally, it requires the review of existing policies to identify and address actions that may conflict with these new policy goals. While the administration has not yet detailed specific changes to the AI Diffusion Rule, this executive order indicates a shift toward policies that prioritize U.S. AI innovation and leadership.

Learn more about the top AI companies most likely to be affected by this rule.

Sam Altman’s Former CTO Launches OpenAI Competitor, Thinking Machines Lab
https://www.eweek.com/news/mira-murati-ai-startup-thinking-machines-lab/ | Tue, 18 Feb 2025

The company aims to make AI systems more accessible, understandable, and customizable. Murati’s team includes tech leaders from OpenAI, Meta, Google, and Mistral.

OpenAI’s former Chief Technology Officer Mira Murati announced her new venture, Thinking Machines Lab, a San Francisco-based artificial intelligence startup focused on enhancing human-AI collaboration. The company said it aims to make AI systems more accessible, understandable, and customizable and has already attracted a team of approximately 30 top researchers and engineers — many recruited from other tech leaders, including OpenAI, Meta, and Mistral.

John Schulman, an OpenAI co-founder and a key figure behind the development of ChatGPT, will serve as Thinking Machines Lab’s chief scientist, while former OpenAI VP of Research Barret Zoph is the new Chief Technology Officer. Other significant additions include OpenAI alums Jonathan Lachman and Alexander Kirillov, as well as experts from Google, Meta, Mistral, and Character AI, reflecting a diverse pool of talent dedicated to advancing AI technology.

Public benefit corporation: AI for the common good?

Murati structured Thinking Machines Lab as a public benefit corporation to highlight its commitment to developing advanced AI that is both accessible and beneficial to the public. The company’s mission centers on three primary objectives: helping people adapt AI systems to meet specific needs, laying the foundation for more capable AI systems, and fostering open science to enhance our collective understanding of AI and related technologies.

Expressing a strong commitment to transparency, Murati said the company plans to regularly publish technical notes and papers and to share code, bridging the gap between rapid AI advancements and public understanding and helping ensure that AI development progresses in a manner aligned with human values and safety.

OpenAI exodus feeds Thinking Machines Lab… and controversy

Murati briefly served as OpenAI’s interim CEO during a management dispute but left the company in September 2024. Her exit was part of a broader wave of company departures that has fragmented the AI landscape and fostered speculation about the future of competition. Thinking Machines Lab’s mission has also sparked interest as a controversy about OpenAI’s own mission is making news.

OpenAI cofounder Elon Musk claims the organization has strayed from its original mission of developing AI for the benefit of humanity, criticizing its transition from a nonprofit research lab into a for-profit entity with close ties to Microsoft as a contradiction of its original commitment to open research and public benefit. In response, CEO Sam Altman defended the shift and countered that Musk himself had pushed for more control and financial gain when he was involved, making the conflict a high-profile clash over the future of artificial intelligence governance and corporate ethics.

Read about the other top companies redefining the AI landscape and developing new applications for the technology.

Elon Musk Offers to Buy OpenAI, Sam Altman Says “No, Thank You”
https://www.eweek.com/news/elon-musk-offers-buy-openai-sam-altman-says-no-thank-you/ | Tue, 11 Feb 2025

Elon Musk proposed a $97.4 billion bid to acquire control of OpenAI. OpenAI CEO Sam Altman dismissed the offer.

In a month when he’s already dominating world headlines, Elon Musk proposed a $97.4 billion bid to acquire control of artificial intelligence research company OpenAI. According to The Wall Street Journal, Musk and a consortium of investors want to steer the company back to its foundational mission of developing AI in a safe and transparent manner.

Back to basics: OpenAI’s nonprofit roots

Musk was one of the cofounders of OpenAI, best known for its revolutionary AI chatbot ChatGPT. He left the organization in 2019, citing strategic disagreements with CEO Sam Altman. He’s remained an outspoken critic of OpenAI’s shift toward a for-profit model and its deepening partnership with Microsoft and has been vocal about his belief that these moves deviate from the company’s original mission.

The proposed acquisition is backed by prominent venture capital firms and Musk’s own AI company, xAI, which created the Grok chatbot to compete with ChatGPT. The consortium’s stated objective is to realign OpenAI with its initial vision of open-source research and broad accessibility of AI technologies.

Altman comes out swinging

In response to the bid, Altman dismissed the offer and joked that OpenAI should instead consider purchasing Twitter for $9.74 billion, a nod to Musk’s acquisition of the social media platform — now called X — for considerably more money. Altman has led OpenAI’s transformation into a for-profit entity, arguing that this structure is necessary to attract the substantial capital required for advanced AI research.

He has also announced ambitious projects, including collaborating with President Trump and the U.S. government on the $500 billion Stargate AI infrastructure initiative aimed at bolstering the country’s AI capabilities. Musk mocked that initiative publicly, claiming that Altman and the other partners did not have sufficient funding.

Musk’s bid is the latest development in an ongoing feud between the two tech titans. Musk has previously filed lawsuits alleging that OpenAI deviated from its original mission and breached agreements. OpenAI has countered these claims and said Musk’s actions would undermine the organization to benefit his own AI ventures.

Industry observers are monitoring the effect of this ongoing power struggle on the trajectory of AI research and its applications. The outcome could have far-reaching implications — not only for OpenAI but for the broader AI landscape — influencing how future technologies are developed, funded, and governed.

Learn more about the top companies defining the AI space and developing new applications for the technology.

AI Vendors Pushed to Disclose Impact on Natural Resources
https://www.eweek.com/news/ai-vendors-pushed-to-disclose-impact-on-natural-resources/ | Mon, 26 Aug 2024

Researchers are pushing tech companies for more transparency about the impact of their AI products on the environment as new reporting shows just how significant it’s likely to be. The increased use of artificial intelligence is already leading to more carbon emissions and higher demand on electricity and water. A growing number of voices are calling on AI developers to share that information with customers.

A search query on OpenAI‘s ChatGPT uses as much as 10 times the power consumption of a standard Google search, while the AI chatbot requires the equivalent of 16 ounces of water to process 10 queries, according to Shaolei Ren, associate professor of electrical and computer engineering at UC Riverside. These levels of consumption cast the industry’s sustainability into question.
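To put those figures in everyday units, here is a minimal back-of-the-envelope sketch. The 16-ounces-per-10-queries water figure comes from the article; the ~0.3 Wh baseline for a standard Google search is an outside assumption (a commonly cited estimate, not from Ren's statement), so the energy number is illustrative only:

```python
# Rough per-query resource estimates from the figures cited above.
# From the article: ~16 US fluid ounces of water per 10 ChatGPT queries.
# Assumption (NOT from the article): a standard Google search uses ~0.3 Wh,
# so "up to 10x" would put a ChatGPT query at roughly 3 Wh.
OZ_TO_ML = 29.5735  # milliliters per US fluid ounce

water_ml_per_query = 16 * OZ_TO_ML / 10   # ~47.3 mL of water per query
energy_wh_per_query = 0.3 * 10            # ~3 Wh per query, if the baseline holds

print(f"Water:  ~{water_ml_per_query:.1f} mL/query")
print(f"Energy: ~{energy_wh_per_query:.1f} Wh/query")
```

Even at roughly 47 mL per query, the totals scale quickly: a service handling a billion queries a day would consume tens of millions of liters of water daily by this estimate.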

As access to water becomes more difficult in some areas of the U.S. and other countries, governments are facing shortages, and AI will only make them worse. The specialized computer chips AI requires are power-intensive and need water for cooling. The increased demand from AI and the data centers that power the technology could also slow the transition to green energy, drive up consumer electric bills, and make blackouts more common.

It’s estimated that global AI-related electricity consumption could rise by 64 percent by 2027, using as much energy as Sweden or the Netherlands. Amazon’s AWS cloud subsidiary recently purchased a nuclear-powered data center campus in Pennsylvania to co-locate its growing AI operations next to the power plant that fuels them, and other vendors are also looking for new power sources to supplement or replace fossil fuels.

But Ren and his colleague Alex de Vries—whose company Digiconomist has a mission of exposing the digital world’s unintended consequences—want to give users more agency in the decision. In a recent L.A. Times article, they told reporter Melody Petersen they are joining a growing number of other experts calling on tech companies to disclose the power and water usage of queries to their customers. 

As more and more companies incorporate AI into their products—often without a clear view of the benefits—the demand for power and water and the drain on natural resources will only worsen. Because AI developers “tend to be secretive about their energy usage and their water consumption,” Ren said, he wants them to disclose the ramifications of using their tools more directly, so customers can make more informed decisions.

Viral Deepfake Videos Driving Increased Fear Over AI
https://www.eweek.com/news/viral-deepfake-videos-driving-increase-fear-over-ai/ | Wed, 14 Aug 2024

Viral footage of software being used to create AI-generated deepfakes in real time over live webcam feeds has given rise to new fears about the potential use of this artificial intelligence (AI) technology for everything from financial fraud to election interference. The Deep-Live-Cam software can extract a face from a single photo and apply it live to another person’s face in a webcam feed.

Videos showing imitations of Republican Vice Presidential candidate J.D. Vance, Elon Musk, Mark Zuckerberg, and actors George Clooney and Hugh Grant in real time have driven interest in the open source software. They also demonstrate how easily deepfakes can now be created thanks to developments in the technology.

Risks of AI Deepfake Technology

According to Ars Technica, deepfakes have already enabled several high-profile fraud incidents. In one, someone stole more than $25 million from a Hong Kong company after impersonating its CFO on a video call.

The Deep-Live-Cam software bundles multiple software packages in a single interface that detects faces in the source and target images and uses the “inswapper” and GFPGAN AI models to swap faces and enhance the footage. Though it is not yet plug-and-play or ready for widespread use, the package puts sophisticated technology in the hands of a larger group of users.

The nature of open source AI software means it is likely to continue to improve as more people use and build upon it.

Learn more about how to prevent AI-based identity fraud in our eWEEK video interview with Ping Identity’s Patrick Harding.
