
E-MAGAZINE

AI in 2024: Hot Takes From Dataiku, Deloitte, & Snowflake

A Coming Era of Compute Cost Volatility

Florian Douetteau
CO-FOUNDER AND CEO, DATAIKU
Florian Douetteau is the Chief Executive Officer and co-founder of Dataiku, the platform for Everyday AI, enabling data experts and domain experts to work together to build data into their daily operations, from advanced analytics to Generative AI. More than 600 companies worldwide use Dataiku, driving diverse use cases from predictive maintenance and supply chain optimization, to quality control in precision engineering, to marketing optimization, Generative AI use cases, and everything in between.

Florian started Dataiku in 2013 out of his passion for data, machine learning, and people. He envisioned a future for businesses with AI becoming mainstream through the collaborative effort of everyone in the company, not just data scientists or technical experts.

To underscore how AI will become increasingly essential to businesses in every sector, I think it is best summarized by Mustafa Suleyman, co-founder and CEO of Inflection AI and co-founder of DeepMind (which has since been acquired by Google), in an article for TIME entitled "How the AI Revolution Will Reshape the World":1

"We are about to see the greatest redistribution of power in history. Over millennia, humanity has been shaped by successive waves of technology. The discovery of fire, the invention of the wheel, the harnessing of electricity — all were transformational moments for civilization. All were waves of technology that started small, with a few precarious experiments, but eventually they broke across the world. These waves followed a similar trajectory: breakthrough technologies were invented, delivered huge value, and so they proliferated, became more effective, cheaper, more widespread and were absorbed into the normal, ever-evolving fabric of human life.

AI is different from previous waves of technology because of how it unleashes new powers and transforms existing power. This is the most underappreciated aspect of the technological revolution now underway. While all waves of technology create altered power structures in their wake, none have seen the raw proliferation of power like the one on its way."

If this statement is not evidence enough that the ability to augment and automate business processes and business decisions with AI will increasingly become a core capability of every business, this might: According to Bank of America Global Research and IDC, global revenue associated with AI software, hardware, service, and sales will likely grow at 19% each year, reaching a staggering $900 billion by 2026, compared with $318 billion in 2020.2

However, to achieve the ideal future state, companies will require various resources, notably:

• Access to the skills (and software) to build AI-powered systems
• Access to the Graphics Processing Units (GPUs) that are needed for the use of AI models, notably Large Language Models (LLMs)

Alongside subject matter experts from our trusted partners at Deloitte and Snowflake, I'm going to share my personal take about AI in 2024: We are entering an era of compute cost volatility. GPU costs will be highly variable and difficult to predict, and costs will be driven, fundamentally, by supply and demand.

Read on for what this means for the organizations poised to harness AI (including Generative AI) at scale in the months and years to come.

1 https://time.com/6310115/ai-revolution-reshape-the-world/
2 https://business.bofa.com/en-us/content/economic-impact-of-ai.html#footnote-2
The GPU Advantage in Mastering LLMs

As a technical refresher, GPUs are computer processors that are designed to accelerate the calculations necessary to render images and videos on a display. Originally developed for rendering graphics in video games, GPUs have evolved to become highly parallel processors capable of handling a large number of calculations simultaneously.

This parallel processing capability makes GPUs well-suited for certain types of computations, such as those involved in machine learning (ML), deep learning, and AI. Before GPUs came on the scene in the late 1990s and early 2000s, Central Processing Units (CPUs) performed the calculations necessary to render graphics. While both GPUs and CPUs are essential components of modern computing systems, they are optimized for different types of tasks.

LLMs are trained on large datasets and can generate human-like text based on the inputs they receive in a process known as inference. This process is most efficiently done on GPUs, which are adapted for parallel processing (meaning the architecture of these models allows for the simultaneous processing of multiple tokens or words).

The parallel processing capabilities and computational power of GPUs are instrumental in the training and deployment of LLMs — they contribute to faster training times and more efficient inference.

So, for organizations aiming to achieve Everyday AI — building Generative AI as well as traditional ML and advanced analytics into the processes and operations throughout their business — this has implications: The costs associated with GPUs will exhibit significant variability and prove challenging to anticipate, primarily influenced by the fundamental dynamics of supply and demand.

That demand will increase (64% of senior AI professionals are "Likely" or "Very Likely" to use Generative AI for their business over the next year, according to a Dataiku and Databricks survey), though we don't know by how much or how quickly. Supply is uncertain, but will undoubtedly be influenced by manufacturing capacity within the semiconductor supply chain (which includes the production of GPUs and is concentrated in a few locations). In turn, manufacturing capacity may be influenced by global events such as natural disasters or geopolitical events.
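To make the refresher above concrete, here is a minimal sketch (assuming PyTorch is installed and, for the GPU path, a CUDA device is available) of the kind of dense matrix math that dominates LLM inference; on typical hardware the GPU run is one to two orders of magnitude faster:

```python
import time

import torch

def time_matmul(device: str, n: int = 4096) -> float:
    # One large matrix multiply, the core operation behind LLM inference.
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for setup before timing
    start = time.perf_counter()
    torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()  # GPU kernels run asynchronously
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f}s")
```

The gap comes from the GPU executing thousands of multiply-accumulate operations in parallel, which is exactly the workload profile of transformer training and inference.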

How to Prepare for & Manage the Cost Variability

Many companies are experienced in managing variable costs of commodities. For example, companies in energy-intensive industries already do this for energy costs, balancing different energy sources to achieve the right balance between availability, price and, increasingly, carbon intensity. Companies that ship products do this with shipping costs, which are tightly coupled with energy costs.

Where compute cost volatility differs is that companies that are not used to this kind of work will need to do it now, as GPU cycles become critical to their work. For example, financial services companies or pharmaceutical companies don't usually engage in energy and shipping trading, but they will need to build such practices for GPU compute. This evolution of compute cost dynamics leads to several consequences:

#1 Companies will need the ability to analyze and assess the tradeoffs between cost and quality of output to strike the most effective balance. This will require a nuanced understanding of the variables at play, ensuring that computational resources are deployed efficiently.

#2 Companies will need the ability to change between models and service providers to adapt cost posture in the face of variable costs (see the sketch after this list). They should also be sure to use automated scaling solutions to dynamically adjust computing resources based on workload demands, which ensures optimal performance during peak times and minimizes costs during periods of lower demand.

#3 As a way to lock in their costs, this shift may drive companies to manage their own GPU clusters, rather than renting them from cloud providers. While this approach introduces the additional overhead of maintaining these clusters, it provides organizations with greater control over their computing infrastructure, potentially offering long-term cost benefits.

#4 Technologies that optimize the models for specific use cases will become more popular. Organizations are increasingly seeking solutions that enhance the efficiency of their GPU utilization, aligning computational resources with the unique requirements of their applications.
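As an illustrative sketch of consequences #1 and #2 (all names, prices, and quality scores here are hypothetical placeholders, not any vendor's real figures), a routing layer can pick the cheapest model that clears a quality bar and fall back when a provider is unavailable:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelOption:
    name: str
    cost_per_1k_tokens: float       # assumed USD price; real prices vary
    quality_score: float            # 0-1, from your own offline evaluation
    complete: Callable[[str], str]  # wrapper around the provider's SDK call

def pick_and_run(options: list[ModelOption], prompt: str,
                 min_quality: float = 0.8) -> str:
    # Cheapest model that meets the quality bar; fall back on failure.
    viable = [o for o in options if o.quality_score >= min_quality]
    for option in sorted(viable, key=lambda o: o.cost_per_1k_tokens):
        try:
            return option.complete(prompt)
        except Exception:
            continue  # outage, quota, or price spike: try the next one
    raise RuntimeError("No viable model available")
```

Keeping the routing decision in one place like this makes it cheap to re-rank models and providers as GPU prices move.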

The GPU Paradigm Shift: How Dataiku Can Help

Dataiku — the platform for Everyday AI — complements organizational strategies for managing compute costs in the context of the Generative AI boom. With robust orchestration across data science and ML workflows (from data preparation to MLOps), organizations can optimize resource utilization and reduce unnecessary compute costs.

The ability to scale horizontally and vertically enables Dataiku to accommodate varying workloads and demands. This scalability allows organizations to adapt to changing compute requirements without major disruptions, thus helping to manage costs effectively.

Next, efficient data management and feature engineering are essential for building high-performing ML models. Dataiku provides tools for managing and processing data, contributing to the creation of optimized datasets that can lead to more efficient model training, potentially reducing overall compute costs.

Further, the collaborative environment of Dataiku allows data scientists and ML engineers to experiment with different models and algorithms, which can lead to more efficient model development. This can enable organizations to choose models that provide a good balance between performance and computational requirements, thus influencing overall compute costs. Next, Dataiku's visual interface enables users to monitor and optimize the performance of ML models, which includes identifying resource-intensive tasks and optimizing them for efficiency.

With the ability to integrate with leading cloud platforms, users can take advantage of cloud services with elastic computing capabilities, paying only for the resources they consume. In times of compute cost volatility, the ability to scale up or down in the cloud can be a cost-effective strategy. Plus, organizations can capitalize on cloud-specific features such as auto-scaling to optimize compute costs and ensure seamless deployment and scalability of machine learning models in cloud environments. To build trust in AI projects and programs, Dataiku supports organizations with teamwork and transparency in a single, centralized workspace, explainability and safety guardrails, and robust AI Governance and oversight.

Conclusion

Particularly in the age of Generative AI, teams will continue to generate and accumulate more data, thus the need for computational power to process and analyze this data will increase. So, while organizations should be mindful of potential compute cost volatility, it shouldn't deter them from continuing to scale ML and AI initiatives.

By adopting a strategic and measured approach (such as with Dataiku), organizations can harness the benefits of these technologies while managing and optimizing computational resources effectively.



An Opportunity to Build Trust

Oz Karan
PARTNER, DELOITTE RISK & FINANCIAL ADVISORY, DELOITTE

Oz is a partner within Deloitte & Touche LLP's Risk and Financial Advisory (R&FA) practice and serves as R&FA's Trustworthy AI leader. He has more than 20 years of risk management, regulatory compliance, and financial advisory consulting services experience with in-depth knowledge of the banking, payments, and fintech industries.

Oz supports clients as an advisor to the Risk, Regulatory, and Finance functional leaders on strategic risk management, regulatory compliance, and operational and technology modernization. He routinely advises client leaders around the possibilities of modern risk & compliance organizations, enabling the organization's most strategic priorities with more effective and efficient processes, governance structure, and technologies.

The age of AI is upon us with the dawn of a broad spectrum of Generative AI capabilities available to consumers. But much like the technologies themselves, trust in these paradigm-shifting technologies will not be built in a day.

As with prior cutting-edge tech, AI is greeted with a healthy skepticism from both organizations and consumers seeking assurances on the privacy and security of their data. This skepticism is further compounded by the black box these solutions can present to AI operators, owners, and developers. As organizations grapple with their use of AI, the line between trust in the machine and trust in the organization blurs.

As AI solutions quickly integrate into many facets of everyday life, consumers and employees won't distinguish between trust in an organization and trust in its use of AI, or its AI output. This will make AI a strategic operational opportunity and a core tenet of brand and reputation management.

The question of "Whom can I trust?" is one we ponder when assessing another's character, intent, or motivations. Personal trust is carefully cultivated over time; it requires consistency, familiarity, and prioritization of meeting that individual's needs.

So, when faced with a technology solution that obfuscates the 'how' and 'why' of its outputs while introducing questions like "Is my data the product?", AI users must prioritize establishing and continually proving trustworthiness.

At Deloitte, we work to understand the risks, rewards, utility, and trustworthiness of AI so that we may help clients across industries, government, and commerce leverage the technology. Deloitte's Trustworthy AI™ Framework provides a cross-functional approach to help organizations identify key decisions and methods needed to maximize Generative AI's business potential.

We know from Deloitte's TrustID Generative AI study of over 500 respondents that consumer trust decreases by 144% when consumers know a brand uses AI1, and that their perception of reliability drops by 157%1. Similarly, employees' trust in their employers decreases by 139% when employers offer 'AI technologies' to their workforce1.

Human trust in AI is an uphill climb. The organizations that focus on building AI solutions with trust by design may claim an advantage in the marketplace.

1 Deloitte's TrustID Generative AI Analysis, August 2023.



Understanding Trust Erosion Possibilities

AI is not infallible. Countless examples in the public sphere have proved as much, from rogue text bots to intellectual property infringement to data exfiltration. Understanding and accepting that AI may fall short of human expectations can help limit the consequences of those failures and formulate responsible planning and response. If, for humans, seeing is truly believing, it may be an onerous journey for AI to gain human trust.

Consider this example: Waymo's joint study with Swiss Re insurance found the driverless rideshare company's cars experience 76% fewer accidents involving property damage compared to human-driven cars2. But comparative statistics alone do not shape public perception, and the perceived risk of an AI solution often pales in comparison to its perceived reward for widespread adoption.

The velocity of AI solutions' availability necessitates balancing innovation with forward-thinking control mechanisms and technological guardrails. Trustworthy AI deployment will likely require a rebalancing of accountabilities and responsibilities across the organization, necessitating cross-functional relationships between operations and technology.

Usage of AI and accountability should be driven from the top down, with boards and management answering for AI incidents similarly to cyber events or regulatory noncompliance. Management that cannot or does not effectively speak to the organization's use of AI could risk fostering a culture of nonaccountability.

Both consumers and employees expect organizational transparency when it comes to the use of AI, and the use of their data in AI. As regulators trend toward requiring informed consent from data subjects, organizations will need to both comprehensively understand and communicate their AI uses, not simply for regulatory compliance purposes, but to keep the lines of communication open with crucial stakeholders inside the organization and out.

2 Comparative Safety Performance of Autonomous- and Human Drivers: A Real-World Case Study of the Waymo One Service



Building AI Solutions With Trust by Design

Organizations that wrangle with the challenges of establishing trustworthy AI design practices today can set themselves up for success into the future.

So, what are trustworthy AI design practices? "Trustworthy AI" refers to the leading practices and ethical standards by which humans design, develop, and deploy the technology that makes it possible for AI machines to produce safe, secure, and unbiased outputs. This is different from "trust in AI," which is a deeper, intrinsic trust between human and machine.

To establish "trust in AI," organizations must embed knowledge, responsibility, and accountability to uphold organizational values and the trust afforded from consumers, employees, communities, and other constituencies. Organizations that articulate and highlight how they secure trust using AI technology in the right ways may have an easier path to gaining people's trust in AI.

Those who continue to build and refine today's AI platforms must keep trust top of mind as they research and develop the next generations of Generative AI. Think of it as "trust by design," where AI designers weave the aspiration into the very fabric of the technology, with all regulatory expectations for safety, security, and accountability firmly in mind.

Across the globe, regulators have identified common themes and core characteristics for organizations to consider as they develop and deploy AI. In the U.S., the White House established an AI Bill of Rights3 highlighting its AI priorities, followed by the National Institute of Standards and Technology (NIST) providing guidance to organizations in understanding, assessing, and managing risk with its own Artificial Intelligence Risk Management Framework4.

This early regulatory guidance provides an outline for proactive organizations to begin constructing guardrails for responsible AI usage. Most recently at the federal level, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence5 seeks to establish safety testing for specific dual-use foundational models to identify and mitigate national security AI risks.

The Executive Order also encourages other regulatory bodies to exercise their existing powers to investigate bias, fairness, privacy, and other potential harms rendered by the use of AI, much like some state-level regulations have for specific industries.

With the full weight of regulation still unknown, organizations should continue paying diligent attention to new rules as well as the application of existing regulations. Though hard to imagine, the complexity of today's AI technologies will likely pale in comparison to the solutions to come.

3 The AI Bill of Rights follows Executive Order 13960: Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020).
4 National Institute of Standards and Technology AI Risk Management Framework



But How to Inspire Trust?

To further balance AI innovation with adequate control mechanisms, consider the value of establishing an AI Governance framework that's aligned to your organization's corporate values. Adhering to regulations and aligning with the organization's values will help demonstrate responsible stewardship of AI both to employees and the public.

By establishing a greater focus on AI controls, including third-party risk management, organizations can build the credibility they crave to form a bedrock of trust.

A foundation of trust is slowly built upon transparency and easily damaged by omission. To effectively establish transparency regarding AI solutions, organizations can proactively share the results of safety testing, such as red-team testing called for in the recent Executive Order5. Explaining to core constituencies the types of guardrails employed to guard against potential harms of AI can help protect consumers6 and establish public trust.

The Rewards of Trust

AI and Generative AI are not domains which will be won or lost. The question is: Which organizations will win with AI? As we enter the Age of With7, trust can be the differentiator between success and failure in this technological revolution.

By building solutions that incorporate trust into all stages of AI development and use, organizations stand to benefit both themselves and society. Organizations that can continually demonstrate prioritization of their key stakeholders through a transparent approach to adoption and effective use of guardrails will continue to gain the trust of the public.

Organizations today know AI is an imperative — those that deem trust of equal criticality will help realize the benefits of AI for all.

5 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 2023)
6 Stanford University Human-Centered Artificial Intelligence 2023 Foundation Model Transparency Index
7 Deloitte's The Age of With™: Exploring the future of artificial intelligence

About Deloitte

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee, and its network of member firms, each of which is a legally separate and independent entity. Please see www.deloitte.com/about for a detailed description of the legal structure of Deloitte Touche Tohmatsu Limited and its member firms. Please see www.deloitte.com/us/about for a detailed description of the legal structure of Deloitte LLP and its subsidiaries. Certain services may not be available to attest clients under the rules and regulations of public accounting.

This publication contains general information only and Deloitte is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor. Deloitte shall not be responsible for any loss sustained by any person who relies on this publication.
The Future of Advanced AI Is Simple

Ahmad Khan
HEAD OF AI/ML STRATEGY, SNOWFLAKE

Ahmad is the Head of AI/ML Strategy at Snowflake, where he helps customers optimize their ML workloads on Snowflake. He also works closely with the Snowflake product team to help define the AI feature set within Snowflake based on the voice of the customer.

Prior to Snowflake, Ahmad spent over four years at AWS, where he focused on the AWS stack of ML services and was involved in early proofs of concept for AWS SageMaker. Ahmad holds a Master's in Electrical & Computer Engineering from the University of Southern California.



In the age of Generative AI, easy access to cutting-edge models is quickly setting the stage for enterprises to realize that the real differentiator is their proprietary data and the models that are customized or fine-tuned with that data. At the same time, Generative AI is quickly making advanced AI accessible beyond highly technical data scientists by making the interface between humans and digitized information natural language rather than code.

The combination of these two trends means it is critical for enterprise leaders to establish a solid foundation for data and custom models as well as define a strategy that focuses on securely delivering AI in the form of natural-language-oriented application interfaces.

The common approach to advanced processing has been to enable AI teams to gain access to the enterprise data and make copies of it to use on their own platforms. This approach of moving data from its governed source has caused pain points by creating new silos and has proved to be a less than ideal process for security teams. These teams need to analyze vulnerabilities as data gets shuffled across a wide range of compute environments that process data and serve model results.

A modern approach is to enable data scientists and other developers to do their advanced processing where the data is already curated and governed. By reducing data movement, developers can both iterate faster as part of their development cycle and reduce security and operational complexities to take projects to production.

Build and Deploy LLM Apps in Minutes

Similarly to other AI processing, such as machine learning models used for predictive analytics, Generative AI demands large volumes of data processing with specialized compute environments. Built with data-intensive processing in mind, Snowflake offers scalable infrastructure and Large Language Model (LLM) application stack primitives that enable developers to build apps in just minutes without moving data or creating copies. This includes Snowflake Cortex, which brings access to leading LLMs such as Llama 2, as well as Snowpark Container Services, which supports the execution of models packaged as containers.



Use AI in Everyday Analytics Within Seconds

As part of a comprehensive Generative AI strategy, data executives must also identify paths to expand adoption beyond the AI experts and drive innovation among the analysts and business teams.

This could be done by enabling multiple teams without engineering backgrounds to use LLMs. This could include analysts using LLMs as part of familiar SQL functions, as well as enabling business teams to use applications with graphical user interfaces that keep data and processing in Snowflake. Snowflake offers first-party applications with pre-built UIs, such as Document AI, to help non-coders search and get answers from PDF documents.

Getting the most value from Generative AI will require organizations to define a holistic strategy that first establishes robust data and model governance and then enables developers to accelerate LLM app development and analysts to leverage AI as part of everyday analytics. 2023 was a pivotal year for enterprise AI, and we look forward to partnering with more organizations as part of their AI strategy in the coming year.
Everyday AI, Extraordinary People

Dataiku is the platform for Everyday AI, enabling data experts and domain experts to work together to build data into their daily operations, from advanced analytics to Generative AI. Together, they design, develop and deploy new AI capabilities, at all scales and in all industries.

[Product graphic: Elastic Architecture Built for the Cloud — Data Preparation, Visualization, Machine Learning, DataOps & MLOps, Analytic Apps, Governance, Generative AI]

©2024 dataiku | dataiku.com

