Drew Robb, Author at eWEEK (https://www.eweek.com/author/drew-robb/)

New Study Identifies Major Roadblocks to AI Adoption for Businesses — Are You Aware of Them?
https://www.eweek.com/news/ai-adoption-roadblocks-businesses-face/ | Mon, 09 Dec 2024
Discover the latest research uncovering critical barriers to AI adoption for enterprises of all sizes.

A study conducted by automation vendor Hyperscience and the Harris Poll found that four out of five organizations are currently using AI, and almost all plan to increase usage in areas such as data analysis, cybersecurity, and predictive analytics. However, it also found significant AI adoption challenges.

Lack of Use of Proprietary Data

The success of ChatGPT and other generative AI (GenAI) engines has led many to neglect the most obvious usage of AI in the enterprise: gleaning further insight from existing data. The study, “Unlocking GenAI: Navigating the Path from Promise to ROI,” found that three out of five organizations are doing a poor job of leveraging their existing data estates. This particularly applies to proprietary data.

For example, most organizations use large language models (LLMs) like those owned by OpenAI, Google, Microsoft, and others. Relatively few have developed their own small language models (SLMs), yet the report found that three out of four organizations using GenAI have noticed that SLMs outperform LLMs in speed, cost, ROI, and accuracy. SLMs can be finely tuned to organizational needs, enabling them to understand context, jargon, and nuances relevant to particular industries. This specialization leads to faster processing times and improved outcomes, allowing businesses to automate repetitive tasks such as data extraction, categorization, and summarization.

“Data is the lifeblood of any AI initiative, and the success of these projects hinges on the quality of the data that feeds the models,” said Hyperscience CEO Andrew Joiner. “Three out of five decision-makers report their lack of understanding of their own data inhibits their ability to utilize GenAI to its maximum potential.”

Many organizations store a wide range of documents, including PDFs, blogs, customer files, database entries, web forms, application forms, orders, invoices, and more. This seems like an area ripe for AI, yet half of organizations don’t use GenAI to help with document processing or streamlining workflows.

This is one of the most straightforward implementations of AI in the enterprise, as the data is right there and is typically contained within the firewall. “GenAI can transform document processing and enhance operational efficiency,” Joiner said.
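What a basic in-house document workflow of this kind might look like can be sketched briefly. The snippet below is a minimal illustration, not Hyperscience’s product: it assumes the open-source Hugging Face transformers package, publicly available models, and locally stored text, so proprietary documents are processed without being sent outside the organization.

```python
# Minimal sketch of an in-house document workflow: categorize, then summarize.
# Assumes the open-source Hugging Face "transformers" package; model names and
# categories are illustrative, not a vendor recommendation. Documents stay local.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

CATEGORIES = ["invoice", "order", "application form", "customer correspondence"]

def process_document(text: str) -> dict:
    """Label a document and produce a short summary without sending it off-site."""
    label = classifier(text, candidate_labels=CATEGORIES)["labels"][0]
    summary = summarizer(text[:3000], max_length=60, min_length=15)[0]["summary_text"]
    return {"category": label, "summary": summary}

if __name__ == "__main__":
    sample = (
        "Invoice #4521 from Acme Industrial Supply. Amount due: $1,845.00 "
        "for replacement valves and fittings, payment terms net 30 days."
    )
    print(process_document(sample))
```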

Other Barriers to Adoption

Whether LLMs or SLMs are used, the key is how the model is trained. This remains a big AI adoption roadblock. Around 77 percent of survey respondents admitted to underusing available data for training AI. As a result, model accuracy suffers, hallucinations become more frequent, and trust in AI diminishes.

Data privacy can also inhibit GenAI adoption. The Harris Poll noted that 83 percent of organizations express ethical and data privacy concerns about AI. Interestingly, according to the survey, small businesses are more likely to recognize and act on such concerns than large enterprises.

Integration is a major barrier to realizing AI’s potential benefits. It is one thing to ask a model for answers. It is quite another to integrate the business’s data into the models and tie the answers and responses into other business applications and workflows.

Overcoming AI Adoption Challenges

There is no simple solution to overcome the many barriers to AI adoption. Many traditional vendors and AI companies offer a wide range of ways to capitalize on AI. Hyperscience advocates hyper-automation to empower organizations to navigate challenges along their GenAI adoption journeys, streamline document processing, and automate complex workflows.

Cisco recommends an approach centered on technology infrastructure. Its AI Readiness Index noted that most networks are ill-equipped to handle AI workloads, with only 21 percent of companies having the necessary GPUs to meet current and future AI demands. Further, as few as 13 percent feel ready to capture AI’s potential, yet 98 percent express urgency to deliver on AI, and 85 percent believe they have less than 18 months to act.

“AI is making us rethink power requirements, compute needs, high-performance connectivity inside and between data centers, data requirements, security and more,” said Cisco’s Chief Product Officer Jeetu Patel. “Regardless of where they are on their AI journey, organizations need to be preparing existing data centers and cloud strategies for changing requirements, and have a plan for how to adopt AI, with agility and resilience, as strategies evolve.”

Can AI Be Trusted? 7% Rise In AI Optimism Challenges Traditional Skepticism
https://www.eweek.com/news/can-ai-be-trusted-ethics-governance/ | Sun, 08 Dec 2024
Delve into the latest Deloitte study that reveals 54% of professionals believe AI poses the highest ethical risk, yet 46% see its potential for social good. Learn how to build trust in AI through governance and compliance.

A recent Deloitte Consulting study into whether AI can be trusted asked which emerging technologies posed the highest potential for serious ethical risk. The “State of Ethics and Trust in Technology” report found that cognitive technologies like AI scored highest, at 54 percent, well above second-place digital reality, at 16 percent. In addition, 40 percent of respondents cited data privacy as a top concern when it comes to generative AI (GenAI).

However, 46 percent of respondents said cognitive technologies also offered the most potential for social good, highlighting the technology’s ability to polarize opinions since it burst onto the scene. Results show that suspicion and ethical concern toward artificial intelligence have dropped by 3 percent since 2023, while the hope that AI will ultimately prove to be a force for good rose by 7 percent.

Clearly, growing familiarity has increased the comfort level of business and IT professionals with AI and GenAI. But the survey shows that the jury is still out. Accelerated adoption of GenAI appears to be outpacing organizations’ capacity to govern the technology and maintain ethical and privacy standards.

What Can Be Done?

Trust is never far from any discussion about AI, fed by decades of novelists and screenwriters churning out stories about AI going rogue. While the Deloitte study shows that ethical concerns linger, it also shows they might be starting to turn around. One way to nudge public perception along in the right direction is to help build trust.

“Respondents show concern for reputational damage to an organization associated with misuse of technology and failure to adhere to ethical standards,” the report said. “AI is a powerful tool, but it requires guardrails.”

Businesses that add governance and compliance guardrails to their AI use can help get buy-in from employees and customers and strengthen trust in the technology.

Is AI Reliable?

Another study casts serious doubt on the accuracy of GenAI outputs. While accuracy falters on tasks humans would find challenging, a surprising finding was that GenAI lacks 100 percent accuracy even on what would be regarded as very simple tasks.

“Scaled-up models tend to give an apparently sensible yet wrong answer much more often,” said study co-author Lexin Zhou, a researcher at Spain’s Polytechnic University of Valencia, “including errors on difficult questions that human supervisors frequently overlook.”

Deloitte recommends appointing Chief Ethics Officers to oversee AI as part of a larger effort to follow ethical best practices for the technology. These individuals would create processes for the safe and accurate use of AI while enforcing compliance, driving adherence to standards, and championing responsibility for ethical usage. For example, ethical principles can be embedded into software code, applications, and workflows.
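What embedding ethical principles in code looks like will differ by organization; the following is a minimal, hypothetical sketch rather than Deloitte’s methodology. The policy patterns and review flow are invented for illustration: every model response passes a compliance check before it reaches a user or a downstream workflow.

```python
# Hypothetical guardrail wrapper: every AI response passes a policy check before
# it reaches a user or a downstream workflow. The patterns below are placeholders
# for an organization's actual ethical and compliance standards.
import re
from dataclasses import dataclass

BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US Social Security number format
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # possible payment card number
]

@dataclass
class ReviewResult:
    approved: bool
    reason: str

def review_output(text: str) -> ReviewResult:
    """Reject responses that appear to leak personal or payment data."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            return ReviewResult(False, f"blocked by pattern {pattern.pattern}")
    return ReviewResult(True, "passed policy check")

def guarded_call(model_fn, prompt: str) -> str:
    """Call a model, then hold non-compliant output for human review."""
    response = model_fn(prompt)
    verdict = review_output(response)
    if not verdict.approved:
        return f"[held for human review: {verdict.reason}]"
    return response

# Example with a stand-in model function:
print(guarded_call(lambda p: "The customer's SSN is 123-45-6789.", "draft a reply"))
```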

“Embedding ethical principles early and repeatedly in the technology development lifecycle can help demonstrate a fuller commitment to trust in organizations and keep ethics at the front of your workforce’s priorities and processes,” said Bill Briggs, Chief Technology Officer at Deloitte Consulting.

The appropriate processes and guardrails should be in place to assure GenAI users that they can trust the reliability of outputs and that they are not inadvertently engaging in theft, plagiarism, or misuse of intellectual property (IP). Additionally, processes should be in place so that humans don’t blindly accept AI’s conclusions and responses as being 100 percent true.

“The increasing scale of GenAI adoption may increase the ethical risks of emerging technologies, and the potential harm of failing to manage those risks could include reputational, organizational, financial, and human damage,” Senior Manager of Deloitte Consulting Lori Lewis said. 

Learn more about generative AI ethics or AI policy and governance.

AI Strategies Under Siege: Inefficient Archiving Solutions Skyrocket AI Energy Consumption
https://www.eweek.com/news/archiving-inefficiency-spikes-ai-energy-consumption/ | Sun, 01 Dec 2024
Discover how outdated archiving solutions are sabotaging AI strategies and escalating energy consumption.

Storage and archiving technology deficiencies could thwart AI initiatives, preventing artificial intelligence technology from achieving its full potential. While the industry is focused on other potential roadblocks—including the inordinate amounts of energy consumed by AI data centers, the lack of power availability, the scarcity of GPUs and high-powered CPUs, and the lack of data center capacity in key markets—the inefficiency of many archiving solutions is getting less attention.

This topic is the subject of a recent report from the Active Archive Alliance (AAA), “How Active Archives Support Modern AI Strategies.”

“While much of the focus of AI adopters has been on the front end of data processing and analytics, the sustainability of AI workflows must now address the long-term retention and protection of what will be massive and persistent volumes of data,” said Rich Gadomski, Head of Tape Evangelism for FUJIFILM North America and AAA co-chair. “A modern strategy is needed to manage the growth and volume of data, and this can be provided by a sensible active archive implementation coupled with intelligent data management.”

Consider the size of many large language models (LLMs). All the data they analyze needs to be stored somewhere. Companies with unlimited budgets can afford to keep a lot of it in memory and the rest on solid-state drives (SSDs), but for most organizations, the cost puts that out of the question. Even keeping all the data on spinning disks is an expensive way to go when you are dealing with the vastness of AI data repositories.

According to Furthur Market Research, storage capacity surpassed one zettabyte (ZB) in 2016, reached 4.8 ZB in 2022, and is expected to hit 50 ZB by 2035. All of that data has to live on powered infrastructure, so AI energy consumption is destined to become a real challenge.

That’s where an active archive comes in. It provides organizations with an intelligent data management layer that can move data where it belongs based on activity, cost, and performance. Data needed by AI applications is shifted to where it can be rapidly analyzed; otherwise, it sits in a lower storage tier such as hard disks, optical disks, or tape. The data layer ensures there are no long delays waiting for data to be made available, and automated tiering takes care of data movement. This contains costs and provides eco-friendly long-term storage or performance storage as needed.
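The AAA report describes the concept rather than prescribing code, but the tiering logic can be illustrated with a toy sketch. The tier names and thresholds below are assumptions chosen for illustration; a production system would also weigh cost, policy, and performance targets.

```python
# Toy illustration of active-archive tiering: place each object on a storage
# tier based on how recently it was accessed. Tier names and thresholds are
# illustrative assumptions; real products also weigh cost, performance, and policy.
from datetime import datetime, timedelta
from typing import Optional

TIERS = [
    ("flash", timedelta(days=7)),       # hot data feeding AI pipelines right now
    ("disk", timedelta(days=90)),       # warm data that should stay online
    ("tape_or_cloud_archive", None),    # cold data on the cheapest long-term tier
]

def choose_tier(last_access: datetime, now: Optional[datetime] = None) -> str:
    """Return the tier an object belongs on, given its last access time."""
    now = now or datetime.utcnow()
    age = now - last_access
    for tier, limit in TIERS:
        if limit is None or age <= limit:
            return tier
    return TIERS[-1][0]

# Example: a dataset last read 30 days ago lands on disk, not flash or tape.
print(choose_tier(datetime.utcnow() - timedelta(days=30)))  # -> "disk"
```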

As organizations create LLMs and build AI applications, the need for more storage and efficient archiving will only increase. The topic of AI and energy consumption is not going away, and that will gradually bring the storage and archiving discussion to the fore.

“AI will accelerate demand for active archive storage,” said Mark Pastor, Director of Platform Product Management at Western Digital. “Archived data will have more value than ever before and will therefore need to be stored for a long time and actively accessed during its life.”

3 New Duos Edge AI Data Centers Launch Amid Exploding AI Infrastructure Demand
https://www.eweek.com/news/new-duos-edge-ai-data-centers-launch/ | Sun, 01 Dec 2024
Discover how three new Duos Edge AI data centers are revolutionizing the tech landscape as demand for AI infrastructure skyrockets.

AI demand is driving an increased need for data center capacity that is already outstripping the supply. According to research firm Omdia, the installed power capacity of data centers needs to reach 170 GW by 2030, of which almost 50 percent will be for AI data centers. Achieving that target requires doubling data center power capacity compared to a few years ago.

“As more power is dedicated to AI, the share of worldwide electricity in the data center is rising sharply,” said Vladimir Galabov, Research Director of Cloud and Data Center at Omdia. “New data centers are optimizing their physical infrastructure for AI.” Demand will only continue to grow as more AI companies look to power their technology.

One area of optimization is shifting a significant portion of computing power to the edge. Accordingly, Duos Technologies just acquired three new edge data centers (EDCs) destined for the Texas market. Accu-Tech built the EDCs, which are ready for deployment wherever they are needed. The company plans to provide 15 more by the end of 2025.

The three EDCs will be operated by a Duos subsidiary known as Duos Edge AI. They will provide low-latency, high-speed internet access that can be tailored for remote districts that currently lack the capability to process AI data. Schools and public institutions in Texas are among the likely beneficiaries. The goal is to bring advanced technology to underserved communities and rural industries by deploying high-powered edge computing solutions.

“These three EDCs are expected to go live by the end of Q1 2025, marking a significant step forward in the Texas digital transformation,” said Doug Recker, president of Duos Edge AI. “By focusing on providing scalable IT resources that seamlessly integrate with existing infrastructure, these EDC solutions expand capabilities at the network edge.”

AI is already exposing the limitations of current data center infrastructure. According to the Uptime Institute, the average data center rack is provisioned for less than 10 kW. AI demands far more power than that; the latest GPUs, AI applications, and networking infrastructure are hungry for more, and rack densities in AI data centers are surging. Duos Edge AI can provide 100 kW per cabinet or more, deploy within 90 days, and position its edge data centers within 12 miles of end users or devices. This is significantly closer than traditional data centers.

The Texas economy is booming, and unlike other areas of the country, it offers abundant access to energy. Thus, the Lone Star State’s IT sector is very much in the ascendant. These EDCs are likely to be gobbled up quickly.

Read our guide to understanding AI energy consumption for more on this growing issue.

Omnia’s AI Readiness Report 2025: 28% of Data Centers Aren’t Ready for AI
https://www.eweek.com/news/ai-readiness-gap-data-centers-unprepared/ | Tue, 26 Nov 2024
Explore Omnia’s AI readiness report to see whether your organization is AI-ready, and take the first step today.

Businesses are investing huge amounts in artificial intelligence technology, with vendors pushing it to customers and integrating it into nearly every software platform or application. But how many are actually ready for AI? According to an Omnia Strategy Group (OSG) analysis, only 28 percent of data centers are prepared to accept AI workloads and provide AI services that offer high performance levels. 

For the rest, some of the biggest challenges include the following:

  • Lack of graphics processing units (GPUs) and high-performance CPUs
  • Insufficient power from the grid
  • Internal power distribution infrastructure that limits capacity
  • Cooling infrastructure that can’t keep AI racks from overheating
  • Not enough space to introduce liquid cooling
  • Lack of internal AI expertise

“The economic implications of this readiness gap are profound,” said Omnia CEO Jessica Marie. “This lack of preparedness signals an urgent need for investment in digital infrastructure and highlights the critical role of APIs in supporting AI integrations.”

Despite these challenges, OSG’s AI report shows that 86 percent of businesses believe that AI will reshape global digital infrastructure, with most rushing to adopt AI any way they can. The findings make it clear that organizations must prioritize readiness, invest in infrastructure, and take a more strategic approach to implementation. Those who address these challenges head-on will be better positioned to leverage the full potential of emerging technologies in the years to come.

API and Infrastructure Constraints 

Application programming interfaces (APIs) let different software applications communicate with each other and make it easy for platforms such as Amazon Web Services (AWS) and Microsoft Azure to integrate with vast numbers of other applications. APIs act as the go-between, providing a standardized method of interaction and integration. These APIs are already being used widely to combine AI with existing data analytics applications.

According to the Omnia report, 46 percent of businesses believe that integrating AI and machine learning workloads into existing operations will be difficult. APIs should ease the addition of AI into the enterprise and facilitate AI readiness, but confidence is relatively low that the current state of the technology will be sufficient. Only 27 percent of respondents think that current API management platforms will be able to support the complexities of AI integration over the next decade. Nearly half of potential AI users are also worried about the problems AI might cause in areas such as security, privacy, compliance, and automation. Many have realized that legacy infrastructure and IT complexity are likely to inhibit their AI ambitions.
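What that API-mediated integration looks like in code is straightforward to sketch. The example below is a hypothetical illustration: the endpoint URL, model name, and response schema are placeholders rather than any specific provider’s documented interface.

```python
# Minimal sketch of wiring an existing workflow to a hosted AI model over a
# REST API. The endpoint URL, model name, and response schema are hypothetical
# placeholders, not any specific provider's documented interface.
import os
import requests

API_URL = "https://api.example-ai-provider.com/v1/chat"  # placeholder endpoint
API_KEY = os.environ.get("AI_API_KEY", "")

def summarize_metrics(metrics: dict) -> str:
    """Send internal metrics to the model and return its plain-text summary."""
    payload = {
        "model": "example-model",  # placeholder model identifier
        "messages": [
            {"role": "user", "content": f"Summarize these KPIs for an executive: {metrics}"}
        ],
    }
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]  # assumed schema

# Example call (requires a real endpoint and API key):
# print(summarize_metrics({"q3_revenue": "4.2M", "monthly_churn": "2.1%"}))
```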

The Geopolitical Implications of Artificial Intelligence

Some believe AI will herald a new era of happiness and productivity for mankind. Others see AI leading to a stark dystopian tomorrow. Certainly, geopolitical disruption is likely. Omnia’s numbers in the AI readiness report show that 79 percent see AI as having a significant geopolitical impact and as a major disruptor in the global competitive landscape.

Any nation that lags behind in AI adoption and integration could find itself at a disadvantage politically, economically, and militarily. According to Omnia, 74 percent believe Artificial Intelligence will play a pivotal role in addressing climate change, future pandemics, and other global challenges. Most accept that disparities in AI capabilities between nations are likely to lead to conflict and heighten competitive dynamics.

“84 percent of respondents think that control over digital infrastructure will be very or extremely important in determining geopolitical influence in the coming decade,” said Marie. “This underscores how technological capabilities are increasingly shaping global power dynamics.”

Sam Altman AGI Prediction Boldly Claims Machines Will Think Like Us As Soon As 2025
https://www.eweek.com/news/sam-altman-agi-macine-prediction/ | Mon, 25 Nov 2024
Altman's AGI prediction: Machines matching human intelligence by 2025. OpenAI CEO claims path is clear, but FrontierMath shows AI still struggles with novel problems.

OpenAI CEO Sam Altman believes artificial intelligence (AI) will be at least on par with human intelligence, and perhaps ahead of it, within a year. He revealed the reason for his optimism during a recent interview: a form of AI known as artificial general intelligence (AGI).

If Altman is right, AI will arrive at its highest level, AGI, much faster than previously envisioned. Most forecasts assumed AGI would take a decade to achieve. After all, AGI requires AI to do all of an organization’s work independently—for example, competent self-driving cars with no glitches, a supermarket managed completely by AI, or a war waged by AI without the need for generals to call the shots. Altman thinks we can achieve AGI by 2025, and that getting there is purely an engineering problem.

In the 1960s, the Jetsons TV show envisioned flying cars. In the 1980s, the movie Blade Runner predicted androids more physically capable than humans—and more intelligent—by 2019. In reality, both remain decades away, or more. It’s impossible to say whether Altman is being overoptimistic or trying to justify his company’s work.

What is realistic? Expect the large language models (LLMs) behind current generative AI (GenAI) applications to continue to evolve, growing more capable, more accurate, and more focused on specific datasets.

“While LLMs provide powerful general capabilities, they are not equipped to answer every question that pertains to a company’s specific business domain,” said Mohan Varthakavi, Vice President of AI and Edge at cloud database firm Couchbase. “Businesses will adopt hybrid AI models, combining LLMs and smaller, domain-specific models, to safeguard data while maximizing results.”

If it is ever attainable, arriving at an AGI-like state will require a complete rework of the entire IT infrastructure and existing data structures. It entails a new kind of AI-enabled coding and application creation, more efficient data centers customized for AI, and a whole lot more.

“The long-term future is a comprehensive transformation where every application—small, medium, and large—is going to be revised and rewritten using AI,” said Varthakavi. “This sweeping movement will mark a fundamental shift from bolt-on solutions to ground-up redesigns, as organizations recognize the benefits of building truly AI-first applications that can fully harness the technology’s capabilities.”

Rome wasn’t built in a day, and AGI won’t be built in a year—or maybe even 10. Only time will tell. In the meantime, Altman’s AGI prediction should be viewed as an indication that AI innovation is moving at a more rapid pace than even he expected and that some breakthroughs lie ahead in the very near future. Watch the full interview on Y Combinator’s YouTube channel to learn more.

Amazon’s $110M Generative AI Investment Fuels University Research
https://www.eweek.com/news/amazon-generative-ai-investment-university-research/ | Mon, 25 Nov 2024
Amazon's $110M generative AI investment boosts university research, providing advanced tools and resources to drive AI innovation in academia.

Enthusiasm around generative AI has produced a large number of AI startups and is fueling massive investment that Goldman Sachs predicts will surpass $1 trillion over the next few years. Amazon is just the latest to put its money where its mouth is, announcing a $110 million investment into generative AI to fund the Build on Trainium program. Build on Trainium will provide compute hours for researchers to envision, experiment with, and create new AI architectures, machine learning (ML) libraries, and performance optimizations designed for large-scale, distributed AWS Trainium UltraClusters. Trainium UltraClusters are essentially cloud-based collections of AI accelerators that can be unified into one system to deal with highly complex computational tasks.

Built on AWS Trainium Chips

The AWS Trainium chip is tailored for deep learning training and inference. Any AI advances that emerge from this Amazon generative AI investment will be made broadly available as open-source offerings. Researchers can tap into the Trainium research UltraCluster, which has up to 40,000 Trainium chips optimized for AI workloads—far more computational power than they could ever hope to afford or assemble locally within academic institutions.

Because high-performance computing resources, graphics processing units (GPUs), and other elements of the AI arsenal don’t come cheap, budget constraints could stall AI progress. This Amazon AI investment will help some university-based students and researchers overcome such constraints. One example is the Catalyst research group at Carnegie Mellon University (CMU) in Pittsburgh, Pennsylvania, which is using Build on Trainium to study and develop ML systems and develop compiler optimizations for AI.

“AWS’s Build on Trainium initiative enables our faculty and students large-scale access to modern accelerators, like AWS Trainium, with an open programming model,” said Todd C. Mowry, a professor of computer science at CMU. “It allows us to greatly expand our research on tensor program compilation, ML parallelization, and language model serving and tuning.”

To hasten the trajectory of AI innovation, Amazon has also been investing in its own technology to make the lives of researchers easier. For example, its Neuron Kernel Interface (NKI) makes it far simpler to gain direct access to AWS Trainium instruction sets. Researchers can quickly build optimized computational units as part of their new models and large language models (LLMs). One of the first breakthroughs you can expect to see is more focused, smaller-scale LLMs.

“Small, purpose-built LLMs will address specific generative AI and agentic AI use cases,” said Kevin Cochrane, CMO of cloud infrastructure provider Vultr. “2025 will see increased attention to matching AI workloads with optimal compute resources, driving exponential demand for specialized GPUs.”

AI Agents Set to Transform Customer Experience Landscapes
https://www.eweek.com/news/ai-agents-transforming-experience/ | Wed, 13 Nov 2024
AI agents are at the forefront of a customer service revolution, reshaping interaction dynamics with enhanced personalization and real-time efficiency.

Businesses are increasingly using artificial intelligence agents to replace or augment the work of human customer service representatives. AI is not the first technology to bring change to the role—it’s evolved over the years to accommodate phone, email, chat, and social media—but the changes it is introducing are dramatic, with autonomous agents taking on significant portions of the customer service load. Understanding how to integrate agentic AI can help businesses make their customer service programs more efficient and more effective while improving customer satisfaction rates. Here’s what you need to know.

KEY TAKEAWAYS

  • AI agents are revolutionizing businesses in various applications across industries, from contact centers to financial services to healthcare and more.
  • Agentic AI receives human input and determines the best action to answer the question, complete the demand, or meet the need, making it useful for accomplishing tasks and acting as a personal assistant.
  • Limitations of AI agents include the potential for over-reliance on them and the risk of physical danger posed by using them in healthcare or self-driving cars.

What Is an AI Agent?

AI agents are task-driven virtual assistants that replace or augment the work of human agents. Many can operate without human intervention across a wide variety of tasks. Automation backed by intelligence is a key part of AI agent functionality.

How Do AI Agents Work?

Agentic AI uses environmental and contextual clues to solve customer or individual problems. Think of how voice prompt systems work—a keyword in the customer’s spoken output generally acts as a trigger to determine a response. While traditional voice prompt systems tend to be robotic and limited, adding AI greatly increases the chances that the system will match the customer request with an accurate or appropriate response.

The AI determines what is needed and sets a series of tasks into motion. For example, it may request that items be ordered and shipped, that inventory be replenished, or that the situation be escalated to a human to address a more complex problem or serious customer dissatisfaction that the AI cannot resolve independently. This is accomplished with various technologies, including machine learning, generative AI, large language models, and neural networks. These advanced analytics-based systems are pre-trained on huge datasets and tested before being unleashed into the real world. Feedback helps improve accuracy over time.
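A stripped-down sketch of that decide-and-act loop might look like the following. The intents, keywords, and handlers are illustrative placeholders, not any vendor’s agent framework; a production agent would rely on a trained classifier or an LLM rather than keyword matching.

```python
# Illustrative skeleton of an agent's decide-and-act loop: match the request to
# an intent, run the associated task, and escalate anything unrecognized.
# Intents, keywords, and handlers are placeholders; a production agent would use
# a trained classifier or an LLM instead of keyword matching.

def reorder_item(request: str) -> str:
    return "Replacement order placed and shipping notified."

def check_inventory(request: str) -> str:
    return "Inventory levels retrieved; replenishment requested."

def escalate_to_human(request: str) -> str:
    return "Routed to a human representative with the full conversation context."

INTENT_HANDLERS = {
    "order": reorder_item,     # e.g., "my order arrived damaged"
    "stock": check_inventory,  # e.g., "is this item back in stock?"
}

def handle_request(request: str) -> str:
    """Pick an action from simple keyword triggers, falling back to escalation."""
    lowered = request.lower()
    for keyword, handler in INTENT_HANDLERS.items():
        if keyword in lowered:
            return handler(request)
    return escalate_to_human(request)

print(handle_request("My order never arrived, can you send another?"))
print(handle_request("I want to dispute a charge from last March."))  # escalates
```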

Some autonomous AI agents are fairly simplistic and designed to accomplish basic tasks such as sending acknowledgment emails whenever someone fills out a form. Others incorporate historical data to make decisions or suggestions concerning the type of product the person may be calling about or a task that needs to be done. More advanced agents try to interpret needs and react in real time as a human agent would.

“Functions such as the user experience, form development, conversational interface development, API automation, intelligent document processing, mining, task mining, process design and execution, business rules management, and workforce management can all be enhanced with AI,” said Amardeep Modi, Vice President of Everest Group.

Key Features of AI Agents

AI agents are designed to fulfill different purposes, which means there’s a lot of variability of features among them depending on the task they’re meant to serve. Some of the most common features include the following:

  • Self-Service: According to a recent global study conducted by Cisco, there is a link between customer satisfaction and effective self-service tools. With agentic AI, people can find what they are looking for without waiting for a rep to answer the phone or wading through a series of documents in a database.
  • Autonomy: Agents are programmed to perform certain tasks without human intervention, approval, or supervision.
  • Self-Improvement: As AI agents gain more experience, they can adapt their responses and actions to increase accuracy or understand context.
  • Application Integration: Agents use APIs to access applications and carry out tasks. For example, they can open an email app to send data or an acknowledgment via email, open a calendar and book an appointment, etc.
  • Multi-Step Tasks: Many AI agents are good at one repetitive task, but the latest generation has evolved to the point where agents can handle multiple tasks. This might include taking orders and following them up with related tasks, like alerting shipping, acknowledging the order via email, and assigning follow-up tasks to others in the CRM system (a simplified chain of this kind is sketched after this list).
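As referenced in the last item above, a simplified multi-step chain can be sketched as follows. Each function is a stub standing in for a real integration (order system, shipping, email, CRM); the names and payloads are hypothetical.

```python
# Simplified sketch of a multi-step agent task chain. Each function is a stub
# standing in for a real integration (order system, shipping, email, CRM);
# names and payloads are hypothetical, for illustration only.

def take_order(customer: str, item: str) -> dict:
    return {"order_id": "A-1001", "customer": customer, "item": item}

def alert_shipping(order: dict) -> None:
    print(f"Shipping notified for order {order['order_id']}")

def send_confirmation_email(order: dict) -> None:
    print(f"Confirmation emailed to {order['customer']}")

def create_crm_followup(order: dict) -> None:
    print(f"Follow-up task created in CRM for order {order['order_id']}")

def run_order_workflow(customer: str, item: str) -> None:
    """Chain the related tasks an agent would trigger after taking an order."""
    order = take_order(customer, item)
    alert_shipping(order)
    send_confirmation_email(order)
    create_crm_followup(order)

run_order_workflow("Pat Lee", "standing desk")
```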

How Will Agentic AI Revolutionize Business?

It’s difficult to estimate the number of ways AI agents can transform a business, and more are being developed almost daily. To date, some of the most common applications include contact centers, financial applications, data collection and analysis, task and project management, personal assistance, and more.

Contact Centers 

A recent Webex study found that only 25 percent of people were satisfied with their last contact center service experience, and 94 percent abandoned interactions due to poor experiences. Customer service applications for AI agents can improve on that. For example, the Cisco AI Assistant for Webex Contact Center leverages conversational intelligence and automation to enhance customer interactions, streamline issue resolution, deliver more empathetic service, and strengthen brand loyalty.

“Customer experience can make or break a brand,” said Jeetu Patel, Cisco’s Executive Vice President and Chief Product Officer. “In the next few years, a large majority of first-time calls will be handled by an AI Agent that will be just as interactive, dynamic, engaging, and personable as a human agent.”

Complex Task Assistance

New AI apps can handle complex tasks simultaneously, such as booking flights, hotels, and airport parking. Anthropic, for example, offers agents and agent development tools that access other applications and carry out a sequence of related tasks, including booking flights, scheduling appointments, researching online, and completing expense reports.

Financial Applications

AI agents are finding their way into banking and financial applications courtesy of AI startups and innovators like SnapLogic. The company helped Independent Bank Corp. of Michigan create AI agents to assist workers, automate processes for fraud detection, reduce IT help desk tickets by correctly handling inquiries, and make real-time adjustments to financial strategies based on evolving market conditions.

Data Collection and Analysis

AI agents are being used to find data on the web and from internal databases, forms submitted online, social media, and other sources. This eliminates immense manual labor in compiling data, combining spreadsheet data, and integrating data sources. AI is also used to slice, dice, crunch, bunch, and draw insights from data in conjunction with data science and analytics applications to expand their reach. For example, ChatGPT and other generative AI tools bring the internet into the realm of analysis.

Personal Assistants

AI agents can perform functions normally done by humans—for example, many of the tasks and services performed by personal assistants. Users can ask their phones or computers to take specific actions, like sending emails, booking flights, canceling appointments, or sending flowers. Auto-GPT, for example, is used by some to create personal assistants based on GPT-4.

Task Management

AI agents can help keep track of which tasks are completed, which are in progress, and which need to be done or redone. For example, BabyAGI creates autonomous AI-powered task management, and AgentGPT can create, complete, and learn from various tasks.

Crypto Applications

AI agents are being incorporated into blockchain technology. These agents can use cryptocurrencies to complete purchases and enhance their capabilities, opening up opportunities for agents to deal with financial transactions. How effective are they? In one public example, an AI agent convinced a venture capitalist to invest $50,000.

Software Coding

Some believe that generative AI (GenAI) can be used to develop software code, signaling the end of the software developer. However, initial results have been less than promising, and only 27 percent of GenAI users report using it to create software programs.

“ChatGPT is not reliable for software development,” said Michael Azoff, Chief Analyst at Omdia, who doesn’t recommend using GenAI for that purpose. “There are other tools out there specifically designed for coding. AI is not about replacing developers or other IT functions. It’s about augmenting them by providing useful tools.”

That said, agentic AI can help developers in other ways. For example, it can help compile software modules based on existing components, like compiling shopping cart and payment options into a new eCommerce application.

Driving

On the futuristic side, AI agents could someday be used in self-driving cars. The driver or vehicle manager could instruct the agent to take the car to a specific location or bring it to the garage for service. However, much work still needs to be done in image recognition, LIDAR interpretation, decision-making, and vehicle control.

Chat 

While automated chat has been around for a while, AI is making it a more sophisticated experience by enabling it to respond to emotion, better understand context, and eliminate the more robotic aspects of traditional chat applications.

Healthcare

In healthcare, AI agents are being used for remote patient monitoring, freeing up nursing and medical staff and alerting them when something serious occurs or it is time for a checkup. Similarly, these agents can compile and summarize healthcare data for individuals and large groups. The resulting data analysis might raise the accuracy of diagnoses and promote better patient outcomes.

Manufacturing

The industrial sector is ahead of many other areas in deploying AI agents. Many operate as the software equivalent of an industrial robot, helping organize product assembly, transportation, floor management, data collection and analysis, and workplace safety compliance.

For example, water heater manufacturer A.O. Smith harnesses AI agents from UiPath. Business Process Optimization Manager Diana Swain uses AI agents to extract, interpret, and process data from forms, PDFs, images, handwriting, scans, and checkboxes using UiPath Document Understanding. AI agents can recognize content, and pre-trained machine learning models add a higher level of intelligence, interpretation, and the ability to trigger actions based on content identification.

“We were having lots of document problems due to bad handwriting, documents written in different formats and platforms, and legacy and new applications being used for invoicing,” Swain said. “Document understanding has already freed up 7,200 hours per year that can be used for more strategic and fulfilling work.”

Limitations of AI Agents

While agentic AI can bring many benefits and a better experience, there are also a few potential downsides. Here are some of the most common:

  • Overreliance (Use It or Lose It): If people delegate everything to AI agents, they may lose their own capability to perform those actions, fail to pass those skills on to the next generation, and basically become so dependent on agents that nothing gets done when there is an outage.
  • Malicious Acts: Nearly every science fiction movie with an AI plot has the AI going rogue or being infiltrated. If a nefarious insider or hacker surreptitiously changes the code, or the AI gets smart enough to change its own code, an autonomous AI agent could do things that might be seen as malicious.
  • Corrupt and Insecure Models: The widespread use of AI agents can cause security and privacy issues. The models behind AI can be corrupted or hacked or, at times, provide misinformation. They might also violate privacy rules, and some users might use them in ways that open the organization to attack.
  • Physical Safety: AI agents are putting lives on the line in healthcare and in self-driving vehicles, where one small error can have serious consequences.

How to Prepare for AI Agents in the Workplace

Some organizations may resist the presence of AI agents due to the risks they introduce, but AI agents are coming, and they’re destined to be deployed in many areas of work and life. The following tips can help you prepare:

  • Implement Them Slowly: AI agents should initially be added where they can accomplish the most gain for the least cost. This will allow the organization to become comfortable with the technology and plan to implement it more broadly.
  • Offer Staff Training: Personnel need to be educated on how to use AI agents, what the guardrails are, and how to stay in control.
  • Define an Effective Use Policy: Drafting and implementing good policy, distributing it broadly and training staff on its use, and enforcing it organization-wide can prevent or mitigate many security and ethical challenges AI agents pose.

Bottom Line: Balance Productivity with Oversight for AI Agents

AI agents can replace humans in certain functions. They can augment human activity, eliminate drudgery, and complete tasks faster than humans, but like generative AI tools, they should always be viewed as a tool to augment human efficiency, not as a replacement. Businesses that embrace their use can benefit from their potential to revolutionize the workplace. However, they should be closely monitored to ensure they don’t stray from their intended purpose.

Read our article about the use of AI in contact centers to learn more about real world applications for this dynamic technology.

NVIDIA Blackwell AI Chip Shortage: Sold Out for Next 12 Months Due to Skyrocketing Demand
https://www.eweek.com/news/nvidia-blackwell-ai-chip-shortage/ | Fri, 25 Oct 2024
With no more NVIDIA Blackwell GPUs available for another year, what will companies do if they want to obtain the compute power required for AI applications and models?

Chipmaker NVIDIA recently announced that its latest Blackwell graphics processing units (GPUs) are sold out until the end of 2025, snapped up by customers including Meta, Microsoft, Google, Amazon, and Oracle. The AI leaders’ deep pockets and large-volume orders make it difficult for smaller companies to compete, relegating them to a year-long wait. It’s too early to know the effect this shortage will have on the tech industry—especially now, with the GPU-heavy development of artificial intelligence booming—but it might lead to competitive disadvantages, a possible black market, or an opportunity for NVIDIA’s rivals to gain ground on the longtime leader.

KEY TAKEAWAYS

  • Unprecedented demand for NVIDIA Blackwell GPUs is being driven by the AI boom.
  • Impact on the market isn’t clear, but it’s likely to provide an opening to competing chipmakers and disadvantage smaller AI companies.
  • Rivals are circling, hoping to capitalize on this delay and tempt users to alternative solutions.

What is the NVIDIA Blackwell Chip?

NVIDIA has grown to prominence over the last couple of years due to heavy demand by AI developers for its H100 and GH200 chips. The company designed its next-generation GPUs, Blackwell B200 and GB200, for demanding data center, AI, and high-performance computing (HPC) applications. The B200 improves on the previous generation’s 80 billion transistors and 4 petaflops with more than 200 billion transistors and 20 petaflops, and packs in almost 200 GB of HBM3e memory to deliver as much as 8 TB/sec of bandwidth to provide the processing power required by high-end data applications like AI.

What Factors are Causing the Shortage?

The NVIDIA Blackwell’s sheer power is an obvious factor in the heavy demand for the chip, but other elements are also at play. Chipmakers have been on a headlong rush to push the thermal design power (TDP) limits for microchips. Interest in AI is booming in parallel, popularized by the rise of ChatGPT and other generative AI applications over the past two years, and data centers are scrambling to upgrade facilities in preparation for demand.

They need GPUs, high-powered CPUs, as much memory as they can assemble, the fastest possible interconnects, and immense amounts of networking bandwidth to be able to facilitate AI workloads. The entire technology stack will need to up its game to avoid becoming a bottleneck for large language model (LLM) processing, but GPUs—which lie at the core of the data center infrastructure required to serve the needs of AI—are critical components. Demand just outstripped supply.

How Might the NVIDIA Shortage Affect Users and Markets?

Current market conditions for NVIDIA’s Blackwell GPUs are grim for many potential customers. The cozy relationship between hyperscalers, tech giants, and NVIDIA is leaving smaller companies a year or more behind in implementing and executing AI and data center upgrade plans. This could lead to competitive disadvantage for smaller companies and greater dominance for current big players.

A GPU black market might emerge, or NVIDIA rivals might gain market share with Blackwell alternatives. NVIDIA has long been the dominant chipmaker, but this could potentially be the point that makes or breaks the company as a market force. If it can’t scale up and maintain quality to meet demand, it could falter in the market or open itself to a takeover. It remains to be seen how this abrupt cessation in GPU delivery to all but the privileged few will play out, but it’s clear that too much is at stake for the rest of the market to sit idly by for a year in the hope of obtaining some precious NVIDIA treasure.

When Will NVIDIA Blackwell Be Back in Stock?

According to the latest projections, NVIDIA AI chips won’t be back in stock until the end of 2025. However, if Meta or one of the hyperscalers suddenly decides it needs another 100,000 GPUs, they could buy out stock before a smaller company even has a chance. Whether NVIDIA would serve smaller companies first or push them down in the queue if more large orders come in from tech giants and hyperscalers remains to be seen.

“As we progress to meet the increasing computational demands of large-scale artificial intelligence, NVIDIA’s latest contributions in rack design and modular architecture will help speed up the development and implementation of AI infrastructure across the industry,” said Yee Jiun Song, vice president of engineering at Meta.

Another factor to consider is certainty of delivery. After a recent shipping delay for the Blackwell chip due to a packaging issue, the company had to redesign how its GPU was integrated with other components within the chipset to avoid warping and system failures. Only time will tell whether the current design will stand up to the rigors of mass production while maintaining quality.

Are There Any NVIDIA Blackwell Alternatives?

A number of competitors are standing in the wings, ready to capitalize on customer frustration about long lead times, including Intel and AMD. Earlier this year, Intel introduced a new AI chip that it claims rivals the performance of the NVIDIA H100 processor for AI workloads. Intel claims its Gaudi 3 chip can exceed its NVIDIA rival in the training and deployment of generative AI models with 40 percent more power efficiency, 50 percent more inference speed, and more than one-and-a-half times the training speed for large language models (LLMs).

Instead of packing more punch into every square millimeter of silicon to produce the biggest LLMs around, some manufacturers are taking a different approach. AMD, for example, has released a small language model called the AMD-135M, trained on AMD Instinct MI250 accelerators and designed to run on hardware such as the AMD Ryzen AI processor.

Similarly, ThirdAI trained its Bolt LLM using only CPUs—instead of using 128 GPUs as GPT-2 does, for example, the company used 10 servers, each with two Intel Sapphire Rapids CPUs, to pretrain the 2.5-billion-parameter Bolt within 20 days. That makes Bolt 160 times more efficient than traditional LLMs, according to Omdia analyst Michael Azoff. “Smaller models mean lower cost and lower power and cooling demands on the data center,” he said.

NVIDIA Chip Shortage: What Does it Mean?

NVIDIA is king of the castle at the moment, dominating the market for compute-intensive processing—but with much of the IT and business world being forced to wait a year or more for its in-demand product, rivals are lining up to try to fill the delivery vacuum. Some are already not far behind NVIDIA’s AI chips, and they’re closing ground fast. While demand remains high, NVIDIA may prove to be a victim of its own success. It remains to be seen whether the company will further assert its dominance or if others will take advantage of the delivery delays to serve AI demands with alternative solutions.

See how two of the most popular generative AI art tools compare in our head-to-head comparison of Runway vs. Midjourney.

AI Death Calculators Claim to Predict Lifespan with 79% Accuracy
https://www.eweek.com/news/ai-death-calculator-predicts-lifespan/ | Wed, 09 Oct 2024
An AI-powered death calculator claims to predict your lifespan using data analysis, raising ethical concerns about its accuracy and societal impact.

KEY TAKEAWAYS

  • Knowing the factors that lengthen or shorten lifespan could result in positive changes in people’s lives.
  • However, some people may be adversely affected by the knowledge and feel it is too late to change their life trajectory.
  • Predictions are not wholly accurate, and there are ethical as well as privacy concerns.

AI death calculators like Life2vec have been in the news for their creators’ claims about their ability to predict people’s deaths by estimating lifespans based on habits, diets, medical histories, lifestyles, and other factors. These tools use artificial intelligence algorithms to compare user data against known medical data to make guesses about likely lifespans, with the stated goal of encouraging proactive health choices. In some cases, however, experts worry the results may induce fear instead. There are also privacy and cybersecurity concerns around the provision of so much health data. Here’s what you need to know about AI death calculators.

How Do AI Death Predictors Work?

AI death calculators use artificial intelligence algorithms that take such personal factors as age, health habits, and family history into account to make predictions about a person’s likely demise. For those fascinated with minute details and decisions about food, exercise, caloric intake, and lifestyle, AI adds a level of in-depth management of the tiny factors that add up to a longer life or that erode lifespan. Some of these tools make it possible to integrate with wearable health and fitness devices or to upload blood tests, genetic profiles, and other personal health documents. Alerts and up-to-the-minute health data can be used to add more precision to lifespan estimates, elevating the calculator to a new level of customized advice designed to improve health based on specific data.

Most AI death prediction calculator apps are free. Some charge for additional data and services—for example, the Death Clock charges to deliver not only the year but the exact date of a person’s death, along with their current biological age. They’re also relatively easy to use, provided you are willing to enter all the required data, including age, weight, height, daily calorie intake, and exercise level. AI processes the user’s information by comparing it against extensive medical data and health studies to deliver an estimated lifespan range and personalized suggestions to enhance their well-being. Those who track calorie intake and can answer with the most certainty will get a closer approximation of their expected lifespan, while guesses will lead to less accurate results.
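None of these apps publish their exact scoring, but the general shape of the calculation can be shown with a toy model. The baseline and every adjustment below are invented for illustration and carry no medical validity; real tools weigh user data against large health studies rather than a handful of fixed offsets.

```python
# Toy illustration of how a lifestyle-based estimator adjusts a baseline life
# expectancy. The baseline and every adjustment are invented numbers with no
# medical validity; real tools compare user data against large health studies.

BASELINE_YEARS = 80.0  # invented starting point, not an actuarial figure

ADJUSTMENTS = {
    "smoker": -8.0,
    "regular_exercise": +3.0,
    "heavy_drinker": -4.0,
    "healthy_diet": +2.0,
}

def estimate_lifespan(age: int, habits: dict) -> tuple:
    """Return a rough (low, high) lifespan range from the baseline plus habit offsets."""
    estimate = BASELINE_YEARS
    for habit, delta in ADJUSTMENTS.items():
        if habits.get(habit):
            estimate += delta
    estimate = max(estimate, float(age))     # never predict a lifespan already exceeded
    return (estimate - 5.0, estimate + 5.0)  # wide band to reflect the uncertainty

print(estimate_lifespan(45, {"smoker": False, "regular_exercise": True}))  # (78.0, 88.0)
```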

How Accurate Are AI Death Calculators?

AI death predictions should be viewed as estimations informed by data and AI analysis, not guaranteed predictions. They are best considered a framework for healthier lifestyle decision-making. Nevertheless, the degree of precision is better than any previous attempt and it can be expected to steadily improve over time.

“Life2vec can predict lifespan with 78 percent accuracy using details like health, income, and profession,” said Mukund Kapoor, a content analyst for AI implementation specialist Weam. “Factors such as being male, having a mental health diagnosis, or working in a skilled profession are linked to earlier death, while higher income and leadership roles correlate with a longer life.”

The methodology used by life expectancy apps is largely based on a Danish study published in 2023. Researchers used natural language processing techniques to study the evolution and predictability of human lives, examining a large series of life events across more than 10 years related to health, education, occupation, income, address, and working hours. The study used day-by-day records of six million Danes, going far beyond any previous attempt to model lifespan.

However, modeling is an inexact science, as shown by the margin of error in models of hurricane paths and climate change. There are just too many factors to take into account, as well as the occasional tendency of AI to hallucinate. “Just like words in sentences, events follow each other in human lives,” said Sune Lehmann, a professor at the Technical University of Denmark and leader of the team behind the study.

What Factors Add Uncertainty or Inaccuracy to Death Predictions?

Because life outcomes are influenced by multiple variables, they’re never entirely predictable. AI death predictions are estimates based on averages determined by reviewing personal data against large amounts of health data. If a user smokes or drinks, for example, those factors might introduce some bias into the data. While the bias is supported by medical studies, it’s not a given—heavy smokers and regular drinkers can live to old age. However, those who regularly smoke or drink can expect, on average, to see their estimated years reduced in the calculations.

Other factors may also be weighted. For example, a non-smoker who lives with smokers might get a lower score than someone who has avoided smoking and smokers their entire life. Some second-hand smoking impact calculations and other factors like them are based on medical information, while others involve a degree of guesswork. Accidents, natural catastrophes, criminal acts, and a host of random factors can bring an end to what might otherwise have been a long life. There’s too much chance and too many influences at play to consider the AI death prediction calculator anything but a decent guess based on some known factors and data.

What are the Ethical, Privacy, and Cybersecurity Concerns of Death Calculators?

The entire concept of AI death calculators brings several issues to the surface, not least of which is the risk of handing over so much personal data to a third party or transmitting it to the cloud. 

“Some may use this to get subscription dollars, and it may not be clear what the data is being collected or used for, and who has access to it,” said Greg Schulz, an analyst for StorageIO Group. “People should be concerned about what information is being collected and shared with whom, and how it is being used.”

There’s no guarantee that insurance companies won’t find a way to access and use the data to deny life insurance claims or increase premiums. Governments have a knack for pressuring companies to hand over data, and unscrupulous individuals could abuse such data if it came into their possession. A data breach could also lead to personally identifiable information being used by criminals for a variety of unsavory purposes. Celebrities, business leaders, and government officials could be subject to blackmail or public humiliation based on the data they turn over.

Bottom Line: AI Death Calculators

AI death calculators like Life2vec may help you become more conscious of lifestyle choices or encourage positive changes. They may also cause anxiety or dismay. While the AI tools consider a lot of factors about your likely health and well-being and review them against medical science, science is unable to make entirely accurate predictions, and it’s important to keep this in mind. When used for fun or to motivate yourself to make positive changes in your life, diet, or health, AI death calculators can be a useful tool, but make sure you weigh that value against the risk of sharing so much personal data.

Learn more about the trends driving the current market of artificial intelligence tools and where it’s likely to head in the future.
