Brittany Brooks, Author at eWEEK https://www.eweek.com/author/bbrooks/

MWC 2025: Lenovo’s Game-Changing Innovations Include AI-Powered Foldables https://www.eweek.com/news/lenovo-ai-mwc-2025/ Mon, 03 Mar 2025 19:28:08 +0000 Lenovo’s ambitious tech at MWC 2025 reveals how the future of computing is evolving with flexibility and sustainability in mind.

The post MWC 2025: Lenovo’s Game-Changing Innovations Include AI-Powered Foldables appeared first on eWEEK.

The Mobile World Congress (MWC) 2025 in Barcelona has become a stage for the most cutting-edge innovations in the tech industry, and Lenovo is at the forefront of the revolution. With a strong emphasis on AI-driven computing, sustainability, and modular adaptability, the company has introduced a lineup of devices that redefine how we interact with technology.

This year, Lenovo’s showcase isn’t just about incremental upgrades – it’s about bold concepts and game-changing designs that push the boundaries of what’s possible. From a laptop that folds into multiple modes to a device powered entirely by solar energy, Lenovo is charting a course toward a more intelligent, adaptable, and eco-conscious future.

Here’s a closer look at Lenovo’s most exciting innovations from MWC 2025.

Business laptops get more flexible and gain AI enhancements

The ThinkPad T-series has long been a business computing staple known for its durability and performance. At MWC 2025, Lenovo is introducing a convertible model with a 360-degree hinge that transitions between laptop and tablet modes. Designed for hybrid professionals, this shift reflects a broader move toward versatile work devices.

The convertible ThinkPad integrates AI-enhanced power management and security features, ensuring optimized efficiency without extra complexity. While convertibles have existed for years, their introduction to the ThinkPad T-series marks a significant evolution, showing that business laptops are adapting to modern workflows that call for mobility and flexibility.

ThinkBook concept that expands screen space

Lenovo is also exploring how screen technology can evolve. Its ThinkBook “Codename Flip” AI PC Concept features an outward-folding 18.1-inch OLED display. It functions as a 13-inch laptop when compact, but when unfolded, it becomes a large-screen workstation. Beyond its flexible form, this concept integrates AI-powered multitasking features, such as adaptive workspace management and real-time collaboration tools. By combining hardware innovation with AI, Lenovo is experimenting with how devices can dynamically adjust to users’ needs in size and functionality.

A step toward solar-powered laptops

The Yoga Solar PC Concept introduces a high-efficiency solar panel system that generates power even in low-light conditions. Lenovo claims that 20 minutes of direct sunlight can provide an hour of video playback, a significant step toward reducing reliance on conventional charging methods.

Though still in the concept phase, this development reflects an industry-wide push for energy-efficient devices. If refined further, solar-powered computing could move beyond niche use cases and become a practical solution for mobile users looking to reduce their environmental impact.

Lenovo’s bold vision for the future

While not all of these innovations will make it to mass production in their current form, they reflect where the industry is headed. Whether it’s a business laptop that doubles as a tablet, a foldable display that expands a workspace, or a device powered by the sun, these developments suggest that the way we think about laptops and personal computing could be radically different in the near future.

1,000+ Musicians vs AI: Silent Album Protesting UK Government Speaks Volumes https://www.eweek.com/news/musicians-silent-album-ai-protest-uk-government/ Wed, 26 Feb 2025 00:02:40 +0000 https://www.eweek.com/?p=232600 Musicians are fighting back against AI’s unchecked use of their work by releasing an album with no sound. Will this silent protest make noise with the U.K. government in particular?

The post 1,000+ Musicians vs AI: Silent Album Protesting UK Government Speaks Volumes appeared first on eWEEK.

More than 1,000 musicians and groups including Kate Bush, Damon Albarn, Annie Lennox, Billy Ocean, and Jamiroquai released a silent album on Tuesday. The unusual release is a direct protest against the U.K. government’s consideration of changes to copyright laws that could give AI companies free rein to train their AI models on music available online. The proposed changes would also allow AI developers to use copyrighted material without needing the artist’s consent, a shift that could reshape the music industry and not in favor of human creators.

When the track titles on the silent album, “Is This What We Want?”, are read in sequence, they spell out the message: “The British Government Must Not Legalise Music Theft To Benefit AI Companies.”
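The titles-spell-a-sentence device can be sketched in a few lines. Note that the grouping of words into track titles below is illustrative only, not the album’s actual track list:

```python
# Hypothetical grouping of the protest sentence into track titles;
# the real album splits the words across its track list differently.
track_titles = [
    "The British Government",
    "Must Not",
    "Legalise",
    "Music Theft",
    "To Benefit",
    "AI Companies",
]

# Reading the titles in order reconstructs the protest message.
message = " ".join(track_titles)
print(message)
# → The British Government Must Not Legalise Music Theft To Benefit AI Companies
```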

Why this silent album matters

If the U.K. government greenlights these proposals, AI companies would be able to scrape publicly available music and repurpose it for training AI models. For musicians, this isn’t just a legal gray area – it’s a potential disaster. Many artists rely on royalties and licensing agreements to sustain their careers. Allowing AI to absorb and replicate their work without permission could devalue their artistry, disrupt their income, and blur the lines between human- and machine-made music.

AI is already using some celebrities’ voices

This isn’t a hypothetical problem – AI is already capable of cloning celebrity voices with unsettling accuracy. From viral deepfake music tracks to entire AI-generated albums mimicking real artists, the technology is advancing at an alarming pace. Listeners are reaching a point where they can’t tell the difference between an AI-generated song and an original track.

For musicians, this raises existential questions: If AI can reproduce their sound, what will happen to their identity as an artist? And if AI-generated music becomes indistinguishable from human-made songs, how will the industry define authenticity?

AI’s threat to creativity

The silent album protest underscores a more significant issue: AI’s growing role in the creative arts. While AI can be a valuable tool for enhancing production, its ability to autonomously create music, art, and even literature raises concerns about originality and ownership. If AI is trained on human-made music without restrictions, it could saturate the industry with machine-generated songs, reducing opportunities for real artists and diminishing the value of human creativity. With this protest, artists are sending a clear message: they won’t sit idly by while their work is harvested by AI. Whether lawmakers will listen remains the real question.

Mark Zuckerberg Bets Big That Real Humans Want Humanoid Robots: Meta’s Latest AI Push https://www.eweek.com/news/meta-humanoid-robotics-ai-expansion/ Tue, 18 Feb 2025 19:41:56 +0000 https://www.eweek.com/?p=232424 Meta is expanding its AI research beyond chatbots to develop humanoid robotics, focusing on AI-driven software and sensors that could power future consumer robots.

The post Mark Zuckerberg Bets Big That Real Humans Want Humanoid Robots: Meta’s Latest AI Push appeared first on eWEEK.

Meta is setting its sights on a bold new frontier: AI-powered humanoid robots. While the tech giant has been deeply invested in AI for years, its latest focus is integrating robotics into everyday life. The company’s research and development efforts aim to push the boundaries of consumer humanoid robotics, leveraging its Llama AI platform to create intelligent machines that can navigate and interact with the physical world.

AI that thinks and moves

For years, AI breakthroughs have revolutionized chatbots, but making robots understand and function in the physical world is an entirely different challenge. Meta has been working on “embodied AI,” a concept that blends intelligence with real-world interactions.

Unlike traditional AI models that exist solely in digital spaces, embodied AI is designed to move, sense, and make decisions in a three-dimensional environment. This is a crucial step toward making humanoid robots practical for everyday use.

To bring this vision to life, Meta has established a dedicated unit within its Reality Labs division. Rather than building its own physical robots, this unit is focused on AI-powered software and sensor technology that could enhance third-party robotics, moving closer to the long-standing goal of AI assistants that can clean, cook, and organize.

Meta’s strategic play

While Meta’s vision for AI-driven humanoids is ambitious, the company isn’t planning to roll out its own branded robots just yet. Instead, it’s focusing on AI-driven sensors and software that can power robots made and sold by other manufacturers, like Unitree Robotics and Figure AI. This strategy positions Meta alongside Tesla, Apple, and Google, all of which are making moves in the robotics space.

Despite a $5 billion financial loss in its Reality Labs division last year, Meta sees this investment as a strategic step toward future growth, tapping into its existing expertise in AI, virtual reality (VR), and augmented reality (AR) to gain an edge in robotics. Meta’s work in AR and VR through its Quest headsets and Ray-Ban Meta smart glasses has already laid the groundwork. The sensors and computing technology developed for AR applications could play a key role in future robotics hardware, potentially accelerating the path to consumer-ready humanoid robots.

Preparing for the consumer market

Meta is not alone in developing consumer robotics technology. Google, Apple, and Tesla are all exploring how AI-powered robots can integrate into everyday life. While much of the focus has been on robots entering the workforce, the consumer market is rapidly becoming the next big opportunity. However, widespread adoption of AI-powered humanoid robots is still years away, as challenges in hardware, cost, and AI safety remain significant barriers.

The Future of Recruiting: More AI and Human Connection https://www.eweek.com/news/linkedin-future-recruiting-ai/ Fri, 14 Feb 2025 19:47:22 +0000 https://www.eweek.com/?p=232336 The future of recruiting isn’t AI vs. humans – it’s AI and humans. See why human skills are more valuable than ever in an AI-powered world.

The post The Future of Recruiting: More AI and Human Connection appeared first on eWEEK.

Artificial intelligence has been reshaping industries, and recruitment is no exception. With AI handling time-consuming tasks, companies are seeing increased efficiency in their hiring processes.

AI integration into recruiting has surged to 53%, signaling a major shift in how organizations approach hiring. As AI takes on more responsibility, recruiters and talent acquisition (TA) professionals are realizing their AI skills are as important as their human skills.

AI has proven effective at job advertising, filtering resumes, scheduling interviews, and assessing candidates’ qualifications. But it can’t replicate emotional intelligence or adapt to the nuances of human interaction. LinkedIn’s 2025 Future of Recruiting Report suggests the future of AI and recruitment hinges on relationship-building, communication, and reasoning.

Rethinking recruitment

With AI handling the legwork, recruiters must focus on what technology cannot – creating meaningful connections. Establishing an initial connection begins with the candidate experience. A candidate’s experience can make or break an organization’s ability to attract and hire top talent, yet it’s often overlooked. In today’s competitive job market, a generic hiring process won’t cut it.

The perfect candidate might slip through the cracks if the hiring process feels robotic or impersonal. Rather than spending time sifting through applications, hiring teams have the unique opportunity to redefine their role. Now, they can focus on understanding candidates’ motivations, career aspirations, and whether they truly fit the company’s culture. A hiring process that prioritizes communication, personalization, and genuine interaction will stand out.

The pressure for quality hires is heating up

Another shift taking place? The demand for high-quality hires. A staggering 89% of TA professionals agree that measuring the quality of hire is more critical than ever. In the post-pandemic hiring frenzy, many companies prioritized quantity over quality. This led to mass layoffs, quiet firing, and an overall misalignment between employees, their roles, and the company.

Organizations have changed course and are looking to hire the right people, not just more people. However, the quality of hire metric is tricky to quantify. That’s where AI steps in again. With 61% of TA professionals believing AI can help measure quality of hire, AI-driven analytics are being used to track employee performance, culture fit, and long-term retention rates. These data-driven insights provide a clearer picture of how new hires contribute over time.
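A composite quality-of-hire metric like the one described above might blend a few normalized signals into one score. The weights, inputs, and function below are hypothetical illustrations, not drawn from the LinkedIn report:

```python
# Illustrative quality-of-hire score: blends performance, culture fit,
# and 12-month retention into a 0-100 number. Weights are hypothetical.
def quality_of_hire(performance, culture_fit, retained_12mo,
                    weights=(0.5, 0.3, 0.2)):
    """performance and culture_fit are normalized to the 0-1 range;
    retained_12mo is a boolean. Returns a score from 0 to 100."""
    w_perf, w_fit, w_ret = weights
    score = (w_perf * performance
             + w_fit * culture_fit
             + w_ret * (1.0 if retained_12mo else 0.0))
    return round(score * 100, 1)

print(quality_of_hire(0.8, 0.9, True))  # → 87.0
```

In practice the inputs themselves (manager ratings, engagement signals, retention) are what AI-driven analytics platforms aggregate; the weighting is a policy choice each organization would tune.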

AI and recruiting: A powerful partnership

The future of hiring won’t depend on speed and efficiency alone. Making meaningful and genuine connections in a way that only people can will play a vital role in recruitment. As AI evolves, the most successful recruiters will be those who balance data with emotional intelligence, using AI as a tool rather than a replacement.

Is AI Helping or Hurting Critical Thought? https://www.eweek.com/news/ai-critical-thinking-impact/ Wed, 12 Feb 2025 19:40:56 +0000 https://www.eweek.com/?p=232258 AI is shaping the way we think – but is it for better or worse? Discover what Microsoft’s new study reveals about AI’s impact on critical thinking.

The post Is AI Helping or Hurting Critical Thought? appeared first on eWEEK.

In an era where generative AI (GenAI) is reshaping how we work, learn, and create, a new study raises an important question: Is AI enhancing our critical thinking skills or quietly eroding them?

A fascinating paradox has emerged — participants with greater confidence in AI were less likely to engage in critical thinking, whereas those with higher self-confidence were more inclined to analyze and question AI-generated outputs. The contrast suggests that blind faith in AI might make us less analytical, while skepticism encourages deeper thought.

However, the study revealed even more interesting information. The nature of critical thinking itself is shifting. Instead of gathering information and strategizing solutions, participants spent more time verifying GenAI’s responses and overseeing the tasks AI executes. In other words, users are becoming AI supervisors rather than problem-solvers.

These insights come from Microsoft’s research, The Impact of Generative AI on Critical Thinking.

Digital amnesia

The growing reliance on AI isn’t just changing how we think — it’s also impacting what we remember. Digital amnesia — the tendency to forget information that we assume technology will store — has been observed in various studies, highlighting the cognitive trade-offs of AI reliance. Phone numbers, addresses, and even general knowledge aren’t committed to our memory because, within a few clicks, we have the information.

Dr. Michael Gerlich, a professor at the Swiss Business School, best summarizes the situation: “While [AI] enhances efficiency and convenience, it inadvertently fosters dependence, which can compromise critical thinking skills over time.” AI’s convenience is undeniable, but the trade-off is a decline in deep analysis and independent reasoning. If we’re not careful, we risk becoming passive consumers of AI-generated content rather than active thinkers.

Why critical thinking still matters

Let’s not forget AI doesn’t think. It recognizes patterns, hence the term “machine learning.” It generates results based on patterns and training data, which means it can make mistakes, reinforce biases, and hallucinate utterly false information. That’s why critical thinking is more essential than ever. Human intervention is necessary to spot inaccuracies, challenge biases, and fill gaps where AI falls short.

Staying sharp in an AI world

Dr. Gerlich stresses that AI tools should be used correctly. Instead of letting AI take over entire tasks, use it to engage in critical discussions. One way to start is to refine your questions. Instead of accepting the first AI-generated answer, rephrase and challenge it. Crafting precise and thoughtful questions strengthens analytical skills.

Additionally, incorporating active learning methods, such as argument analysis, problem-based learning, and reflective exercises, helps prevent critical thinking atrophy. As AI models evolve, we must ensure our cognitive skills evolve alongside them. The purpose of AI tools is to make us better at what we do, not think for us. But, like any tool, its value depends on how we use it.

Another OpenAI Researcher Quits, Calls AI “Terrifying” and a “Risky Gamble” https://www.eweek.com/news/open-ai-researcher-quits-calls-ai-terrifying/ Wed, 29 Jan 2025 19:09:10 +0000 https://www.eweek.com/?p=232084 OpenAI has experienced a series of abrupt resignations among its leadership and key personnel since November 2023. From co-founders Ilya Sutskever and John Schulman to Jan Leike, the former head of the company’s “Super Alignment” team, the exits keep piling up. But that’s not all—former safety researcher Steven Adler and Senior Advisor for AGI Preparedness […]

The post Another OpenAI Researcher Quits, Calls AI “Terrifying” and a “Risky Gamble” appeared first on eWEEK.

OpenAI has experienced a series of abrupt resignations among its leadership and key personnel since November 2023. From co-founders Ilya Sutskever and John Schulman to Jan Leike, the former head of the company’s Superalignment team, the exits keep piling up. But that’s not all—former safety researcher Steven Adler and Senior Advisor for AGI Preparedness Miles Brundage have also left. These are just the notable names; many other employees have chosen to depart.

What’s making alarm bells ring louder is the reason behind their voluntary separation. One common thread tying these departures together is the fear that OpenAI is prioritizing profit over society’s safety. In the high-stakes world of AI, that’s a red flag that’s impossible to ignore.

Why Are Employees Leaving OpenAI?

AI safety and governance have been gaining more attention, and for good reason. AI models are getting smarter, and companies are racing to develop artificial general intelligence (AGI). With AI’s accelerating development, AGI is on track to become a reality. However, many former OpenAI employees feel the company is more focused on rapid innovation and product launches than on adequately addressing the risks of AGI. Leike has been particularly vocal about this issue.

“We’re long overdue in getting serious about the implications of AGI,” he posted on X, criticizing the company for putting AI safety on the back burner.

Why Worry About AGI?

AGI refers to an AI system that can autonomously think, learn, reason, and adapt across various domains, performing any intellectual task a human can. Unlike today’s AI, which is designed for specific tasks, AGI could self-improve and potentially exceed human intelligence. This might sound like a sci-fi movie, but scientists in China have already developed an AI model that can self-replicate without human intervention. In a test simulation, the AI sensed an impending shutdown and replicated itself for survival. At this point, it’s not just advanced technology. It’s survival instincts in action.

It’s this type of capability that’s keeping AI safety advocates awake at night. If AGI’s goals aren’t aligned with human values and well-being, the ramifications could be catastrophic. Imagine an AI optimizing for efficiency and deciding that humans are the bottleneck. Can we trust that AI has our best interests at heart?

AI Governance and Safety

AI safety is a non-negotiable factor. Without strict governance and safety measures, AGI could become unpredictable, dangerous, and uncontrollable. The European Union, China, and the United States are working on AI laws and policies. Companies like IBM, Salesforce, and Google have pledged to build AI ethically. These are positive steps, but it’s clear we’re still playing catch-up.

Can AI Pass Humanity’s Ultimate Intelligence Test? https://www.eweek.com/news/can-ai-pass-ultimate-iq-test/ Fri, 24 Jan 2025 15:01:50 +0000 https://www.eweek.com/?p=232030 Can AI pass Humanity’s Last Exam? Discover the bold benchmark redefining artificial intelligence and its potential.

The post Can AI Pass Humanity’s Ultimate Intelligence Test? appeared first on eWEEK.

A groundbreaking AI benchmark called “Humanity’s Last Exam” is sending ripples through the AI community. Developed by the Center for AI Safety (CAIS) in partnership with Scale AI, it aims to be the ultimate test of whether AI can achieve human-like reasoning, creativity, and problem-solving. These traits separate true intelligence from mere mimicry.

Humanity’s Last Exam is designed to push the boundaries of what AI can do. It’s a benchmark that challenges AI systems to demonstrate capabilities far beyond traditional tasks, setting a new standard for evaluating AI.

An AI Benchmark Unlike Any Other

Humanity’s Last Exam isn’t about measuring raw computational ability or accuracy in tasks like summarizing articles or identifying images. Instead, it assesses general intelligence and ethical reasoning. The benchmark challenges AI to tackle questions in math, science, and logic while addressing moral dilemmas and the implications of emerging technologies.

“We wanted problems that would test the capabilities of the models at the frontier of human knowledge and reasoning,” explained CAIS co-founder and executive director Dan Hendrycks.

A standout feature of the benchmark is the incorporation of “open world” challenges, where problems lack a single correct answer. For example, AI might analyze hypothetical situations, weighing ethical considerations and predicting long-term consequences. This ambitious test pushes AI to demonstrate contextual understanding and judgment.

Is AI Getting Too Smart?

Critics question whether Humanity’s Last Exam overemphasizes human-like traits, sparking debates about its practicality and feeding fears of AI one day surpassing human intelligence. However, its supporters argue that benchmarks like this one are essential for exploring the true capabilities of AI and revealing its limitations. By pushing boundaries, this test offers a crucial glimpse into the future of AI, one that’s fascinating and, for some, a little unsettling. That leaves the question: Is this the key to understanding AI, or are we venturing into territory we’re not ready to face?

What Lies Ahead

The initial trials have already begun, with major players like OpenAI, Anthropic, and Google DeepMind participating. So far, OpenAI’s GPT-4o and o1 models are leading the pack, but none of the AI models have cracked the 50 percent mark… yet. Hendrycks suspects that the AI models’ scores could rise above that by the end of this year. Whether Humanity’s Last Exam will prove to be an insurmountable challenge or the beginning of a new era in artificial general intelligence remains an open question.

Read our reviews of Grok, ChatGPT, and Gemini and judge their intelligence for yourself.

78% of Executives Plan to Invest More in AI https://www.eweek.com/news/execs-plan-to-invest-more-in-ai/ Thu, 23 Jan 2025 16:55:44 +0000 https://www.eweek.com/?p=231994 AI is shaping the future of business. See how tailored strategies, strong leadership, and collaboration deliver sustainable innovation and value.

The post 78% of Executives Plan to Invest More in AI appeared first on eWEEK.

AI was met with excitement and hesitation when it first entered the business world. Leaders were unsure how to integrate this groundbreaking technology into their operations. High implementation costs, inconsistent results, and a lack of understanding held many businesses back. Initially, AI’s potential felt out of reach, leaving companies to wonder if it could ever live up to the hype.

Today, the narrative has completely shifted. AI is now a driving force for innovation and growth. According to a Deloitte report, 74 percent of executives said that their generative AI initiatives are meeting or exceeding ROI expectations, while 78 percent plan to increase AI spending in the coming fiscal year. The technology has matured, and so have the practices for adopting it.

Enter the Chief AI Officer

The rise of the Chief AI Officer (CAIO) signals a shift in how businesses approach AI. Once considered the domain of IT or data teams, AI now demands strategic oversight at the highest level. The CAIO’s job is to align AI with business objectives, ensure its ethical use, and accelerate innovation.

“The Chief AI Officer can develop efficiencies within the organization,” according to former Dell CAIO Jeff Boudreau, “bringing greater productivity for team members and a better experience for their end customer.” This trend extends beyond the private sector. Approximately two-thirds of U.S. federal agencies have a designated CAIO, underscoring AI’s critical role in both public and private sectors.

Incorporating People into Your AI Strategy

AI’s capabilities may be transformative, but its success depends on people. Companies that excel at AI adoption recognize the importance of collaboration between technology and human insight. Managers translate the organization’s AI vision into actionable goals, ensuring their teams are equipped to execute them. Employees are also integral to the process, participating in pilot projects and providing feedback to refine AI solutions. This collaborative approach builds trust and ensures AI enhances workflows and delivers measurable value. The Deloitte findings support these best practices.

The Best AI Strategy Depends on the Company

No two companies are the same, and neither are their AI strategies. The best approaches are tailored to each company’s unique goals, challenges, and cultures. For example, a retail company might focus on AI for inventory optimization, while a healthcare provider may prioritize patient diagnostics or data security.

Businesses must assess their resources, technological infrastructure, and employees’ skills to determine what works best. Aligning AI initiatives with industry regulations and ethics also creates a strategy that maximizes ROI, enhances operations, and complements the overall vision. With strong leadership and a tailored approach, AI can deliver sustainable, long-term value.

See what the top AI companies are doing to shape this dynamic technology and the applications it’s used for.

Use of Humanoid Robots to Increase by 61% https://www.eweek.com/news/ai-robots-to-increase-in-use/ Tue, 21 Jan 2025 13:56:45 +0000 https://www.eweek.com/?p=231957 Robots are no longer fiction. See how AI is driving innovation at CES 2025. Learn about their impact on everyday life now!

The post Use of Humanoid Robots to Increase by 61% appeared first on eWEEK.

Long a staple of science fiction, humanoid robots are no longer just figments of imagination—the future came sharply into focus at the 2025 Consumer Electronics Show (CES) as robots of all kinds took center stage. From bartenders mixing cocktails and cleaners tidying up spaces to factory robots collaborating with employees, robots are increasingly able to perform sophisticated human tasks as companies like NVIDIA, Boston Dynamics, and Tesla incorporate AI to enhance their capabilities.

AI is helping robots see, learn, and adapt to dynamic environments. It’s no exaggeration to say we’re at the dawn of a robotic revolution. By 2050, the implementation of humanoid robots is projected to increase by 61 percent, with more than 648 million in operation. These robots will be smarter, faster, and more intuitive, but challenges remain before they can fully integrate into everyday life.

What’s Holding AI Robotics Back?

Current robots struggle to truly engage with their surroundings. For instance, processing visual information in real time is a significant hurdle, even for simple tasks like picking up a fallen object. Humans react almost instantly, but robots face latency issues due to their dependence on cloud computing. AI robots are further challenged by unexpected changes in their environments and have difficulty interpreting subtle cues of human behavior, such as reading body language or understanding social norms.

These limitations slow down their ability to interact effectively. The solution? AI world models, which empower robots to process information in real time, react quickly to changes, and even learn like humans. By incorporating these advancements, robots can work faster and better understand our world, paving the way for a future where they seamlessly interact with people and their surroundings.

The Fear of Robots Replacing Human Workers

The rise of AI robotics sparks a debate: will robots replace human workers? Industries like banking have already felt the impact of AI, with predictions of 200,000 positions being eliminated over the next three to five years. The implementation of humanoid robots could have the same effect. While the fear of job displacement is valid, experts argue that AI robots could improve workers’ lives rather than replace them.

By taking over repetitive tasks, robots can free up employees for more creative and meaningful roles. Additionally, the limitations of current robotics make the human element indispensable. The age of humanoid robots is here, and it’s only getting more dynamic. While hurdles remain, the integration of robots promises to revolutionize industries and redefine how we work and live.

Read about the top AI-proof jobs to protect your career from the coming surge of chatbots, agents, and robots, or explore AI jobs to see what they require and how much they pay.

AI “Brad Pitt” Cons Woman Out of $800,000 https://www.eweek.com/news/ai-brad-pitt-scams-woman/ Thu, 16 Jan 2025 16:05:07 +0000 https://www.eweek.com/?p=231897 Discover the harrowing tale of a 53-year-old French woman who lost $800,000 in a sophisticated AI scam, believing she was in love with Brad Pitt.

The post AI “Brad Pitt” Cons Woman Out of $800,000 appeared first on eWEEK.

A 53-year-old French woman lost $800,000 in a heart-wrenching scam after believing she was in a romantic relationship with Hollywood star Brad Pitt. Not just any scam, this was a sophisticated AI-powered hoax involving deepfake images, fake messages, and fabricated stories. The ordeal began when the woman created her first Instagram account while on a ski trip with her family and received a message from someone claiming to be Brad Pitt’s mother, Jane Etta Pitt.

The message simply said that the woman was the person her son should be with. Flattered by the unexpected attention, she responded, unaware she was stepping into a carefully crafted trap.

Over time, the AI scammer impersonating Brad Pitt sent her poems and declarations of love. He began asking for money, claiming his ex-wife Angelina Jolie had frozen his bank accounts, and later that he was dying of kidney cancer, even enlisting a fake doctor to confirm the story.

Feeling compelled to help, the woman wired $800,000 to a Turkish bank account, convinced she was saving the life of her “true love.”

How AI is Making Scams More Convincing

Not that long ago, you could avoid a scam by hanging up on the scammer. Not anymore. The Brad Pitt AI scam underscores how advanced technology is making scams more convincing. AI scammers exploit emotional vulnerabilities by learning a victim’s likes, dislikes, and triggers. For this woman, the scammer created a deeply personal bond, ultimately leading her to make life-altering decisions, like agreeing to marry him.

After the scammer popped the big question, she divorced her millionaire husband despite repeated warnings from her daughter. Beyond catering to a person’s need to be loved, scammers also use AI to generate audio clips, videos, or pictures of loved ones in trouble to demand urgent financial assistance.

AI Scams to Look Out for in 2025

Here are some of the top AI scams to keep an eye out for in 2025:

  • AI Chatbot Scams: A scammer will create a fake online chat and pose as a customer service agent or company employee to ask for personal information.
  • AI Deepfake Scams: Like the Brad Pitt impersonation, scammers use AI to create highly realistic videos of public figures or loved ones. 
  • AI Voice Cloning Scams: This technology empowers scammers to mimic voices in real time, making victims believe they’re speaking to someone they trust.

Before trusting any claims, whether from a celebrity or a loved one, verify their identity through trusted channels. If a family member claims they’re in danger, reach out directly or confirm their safety with someone close to them. Always question the authenticity of urgent requests, especially when they involve financial transactions. Remember, if it feels too good or too urgent to be true, it probably is.

Learn more about the challenges and solutions around AI and privacy to keep yourself safer online.
