Jollibee #ChickenSad: A costly IT problem
Jollibee is losing millions of pesos a day due to an IT problem that forced some of its stores to close. Here are the possible causes of the problem and the lessons we can learn from it.

In August 2014, Jollibee Foods Corporation announced that a major IT system change it undertook was to blame for the lack of the popular Chickenjoy in some of its stores. The change affected the fastfood giant's inventory and delivery system, forcing 72 of its stores to close.
The brand has taken a hit: aside from its loyal customers taking their disappointment to social media, Jollibee has lost 6% of its sales, at least for the first 7 days of August, due to the problem. Using Jollibee's 2013 revenue, that amounts to P92 million. This is on top of the P500 million ($11.37 million) that the company supposedly shelled out for its new IT system. (Editor's note: Other reports say Jollibee stands to lose some P180 million ($4.09 million) in revenues a day.)
I asked some of my friends in the industry about what could have caused Jollibee's costly IT disaster and the lessons we could learn from it. Here's a summary of their insights and mine.

ISSUES

1. System migration

Jollibee had been using a product from software company Oracle to manage its supply chain, which covers inventory, the placing of orders, and the delivery of supplies to stores. Insiders said a dispute with Oracle prompted Jollibee to switch to its rival, SAP.

Now, supply-chain software products aren't out-of-the-box packages that you can just install and run. They need to be customized to fit a company's business processes. The customization usually takes months, if not over a year, and involves programming and configuration. Jollibee outsourced this project to a large multinational IT service provider. Jollibee's Oracle system had been running for years and almost certainly carried a huge amount of complex programming and continuous modifications over time. There must have been fragile interrelationships between these programs and configurations, making the migration to SAP a huge and risky move.
2. Staffing and expertise

The migration project was outsourced to a large multinational IT service provider with no sizable local team handling SAP, according to members of the Philippine SAP community I was able to interview. My interviewees had never heard of that vendor taking on Philippine SAP projects before, which is why they concluded that the vendor does not have significant local SAP expertise.

Also, they said there was a flurry of recruitment of SAP professionals by that vendor. It was a red flag because it seemed the vendor was having trouble filling the positions required for the project. The vendor reportedly brought in people from India and other countries, but sources said the project remained understaffed.

To assemble a large team of outsiders and have them work on a complicated project that quickly? It's troublesome. We can assume the outsiders have not worked under a common methodology and culture. They don't have a common understanding of standards and processes. It takes a while to learn the ropes.

3. Schedule and size

This is a half-a-billion-peso project, but it ran on a schedule of just a little over a year from the time the recruitment activity started until the supply chain issue broke out. Many of the projects I've seen costing just 5% of this amount had a two-year timetable. A project of this size will require 3 to 5 years to properly implement from inception to transition. Maybe this was just the first phase, but unfortunately for Jollibee it was already costly.

4. Testing

Testing to check whether the system's features and processes are working is one of the most overlooked aspects of IT projects. Unfortunately, most projects do this towards the end. The later the defects are found, the more expensive they are to fix.

I asked an SAP expert how testing is done in SAP, and he replied, "You'd be surprised at what passes for unit/functional/integration testing in Oracle and SAP projects." While the practices and tools for testing have matured over the last two decades, very few of them are properly applied to most ERP projects like Jollibee's, according to my source. ERP, or Enterprise Resource Planning, is the software system for business processes.

RECOMMENDATIONS

1. Start small
The larger the IT project, the greater the chance of failure. This is because it's difficult to accurately predict upfront the requirements, system design, and human interactions needed in a project. Stakeholders don't really know what they want until they actually get to use a system. Engineers can't validate their designs until they have built components to test. And the way engineering teams and business units interact during the course of a project usually has a huge impact on schedules and deliverables.

It's better to start with a very small project, one that can be done over 6 months with 5 people or fewer. The project can be presented quickly to stakeholders and used as input for succeeding changes or enhancements. Engineers will also be able to test their designs before any huge construction is done, making changes less costly. It's important that the initial team include veterans. The team members can then be seed members of succeeding larger projects or of several small projects done in parallel.

2. Testing should be core and automated

An IT project must employ Test-Driven Development, where testing is central. Basically, this approach means that tests are defined before each piece of work is started. Testing is done not just by dedicated testers, but by every member of the team. Automated tests are preferred over manual ones; rich automated testing tools have emerged over the last two decades, and many of them are free and open source.

As the system is being built, automated tests should be done on even the smallest units
of the system. Since the tests are automated, they can run multiple times a day, giving
the team instant feedback on defects. This results in high quality work at every step of
the project.
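To make the test-first idea concrete, here is a minimal sketch in Java, assuming JUnit 5 and a hypothetical StoreInventory class (the names are illustrative and not from Jollibee's actual system). The tests are written before the production code they exercise, and because they are automated, they can run on every change:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

// Written first: these tests define the expected behavior of a (hypothetical)
// store inventory before the production code exists.
class StoreInventoryTest {

    @Test
    void reservingStockReducesTheAvailableQuantity() {
        StoreInventory inventory = new StoreInventory(100); // 100 units on hand
        inventory.reserve(30);
        assertEquals(70, inventory.available());
    }

    @Test
    void reservingMoreThanAvailableIsRejected() {
        StoreInventory inventory = new StoreInventory(10);
        assertThrows(IllegalArgumentException.class, () -> inventory.reserve(11));
    }
}

// The simplest production code that makes the tests pass; it can then be
// refactored safely because the automated tests re-run after every change.
class StoreInventory {
    private int available;

    StoreInventory(int available) {
        this.available = available;
    }

    void reserve(int quantity) {
        if (quantity > available) {
            throw new IllegalArgumentException("Not enough stock to reserve " + quantity);
        }
        available -= quantity;
    }

    int available() {
        return available;
    }
}
```

The point is the order of work: the failing test comes first, the code follows, and the whole suite becomes the safety net that runs many times a day.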

3. Delivery must be continuous

One of the riskiest things I see organizations do time and time again is a big migration to a new system. They have an announcement that says, "System X will go live by (launch date)!" When that day comes, it's invariably a mess. People can't get work done with the new system, and the old system is gone. If they're lucky, the old system is still around while the new system undergoes bug fixing.

Compare this to how Google and Facebook roll out their changes. Notice that your Gmail and Facebook get new features every few weeks or months. If you don't like a feature, there's a button that allows you to go back to the old way of doing things. This button is Google's and Facebook's way of getting feedback from their users. They roll out the new feature to a set of users. If the users opt for the old feature, then Facebook and Google know they still need to improve the new feature. Then they roll it out again to another set of users. When they reach the point where few users opt for the old feature, they know they've gotten the new feature right and make it a permanent part of their systems.
You can apply this to business systems. Don't roll out your system in a big bang. Roll it out feature by feature, every few weeks or months, to a set of users, and then get their feedback. It will be easier and safer to roll out small changes rather than large ones. Even the deployment and rollout can be automated. This will certainly be less costly for your company.
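As a minimal sketch of this kind of gradual rollout in Java (the flag, the percentage, and the class names are illustrative assumptions, not how Google, Facebook, or Jollibee actually implement it), a feature flag can route a fixed share of users to the new path while letting anyone switch back:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a percentage-based feature flag with a per-user "go back" option.
class FeatureFlag {
    private final int rolloutPercent;                       // e.g. 10 = 10% of users see the new feature
    private final Set<String> optedOut = ConcurrentHashMap.newKeySet();

    FeatureFlag(int rolloutPercent) {
        this.rolloutPercent = rolloutPercent;
    }

    // Deterministic per user: the same user always lands in the same bucket.
    boolean isEnabledFor(String userId) {
        if (optedOut.contains(userId)) {
            return false;                                   // user pressed the "old way" button
        }
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < rolloutPercent;
    }

    // Called when a user switches back; opt-outs double as feedback on the new feature.
    void optOut(String userId) {
        optedOut.add(userId);
    }

    int optOutCount() {
        return optedOut.size();
    }
}

// Hypothetical caller: the old path stays available as the fallback
// while the new path is validated with a small set of users.
class OrderingService {
    private final FeatureFlag newDeliveryFlow = new FeatureFlag(10); // start with 10% of users

    String deliveryPathFor(String userId) {
        return newDeliveryFlow.isEnabledFor(userId)
                ? "new-supply-chain-path"
                : "old-proven-path";
    }
}
```

Because the flag is checked at runtime, widening the rollout or rolling back becomes a small configuration change rather than another big-bang migration.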

4. Be transparent

My final piece of advice: be transparent with your client. Allow your client to monitor the progress of the project and catch problems earlier rather than later. Provide concrete evidence, such as:

Regular demos. Provide your client with working software, not PowerPoint presentations. Let them try out the features of the software. Get their feedback.

Test reports. Automated tests run multiple times a day using centralized systems called Continuous Integration servers. These systems give clients reports on the various tests and whether they've succeeded or failed. Some of these tests, known as acceptance tests, can be read by non-technical users, so you'll see what behavior is being added to the system and whether the system already complies with that behavior (see the sketch after this list).

Quality metrics. Aside from test reports, various tools can be added to the Continuous Integration server to generate other reports. Among these are metrics on quality. For example, in Java, there are various tools that can check whether a system contains code that leads to bugs or logic that is too convoluted, and whether code violates standards.

Big visible charts. If the team works onsite, various charts can give the rest of the organization an idea of the team's progress. Two of the popular charts are task boards and burndown charts.
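To illustrate the acceptance-test idea mentioned above, here is a small, hypothetical JUnit 5 sketch. The display name reads like the business rule itself, so a non-technical reviewer scanning a CI report can tell which behaviors the system already supports (every class name here is invented for the example):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

// Acceptance-style test: the display name states the business rule in plain language.
class StoreReplenishmentAcceptanceTest {

    @Test
    @DisplayName("A store whose stock falls below the reorder point gets a delivery scheduled")
    void lowStockTriggersDelivery() {
        Store store = new Store("STORE-0042", /* stockOnHand */ 5, /* reorderPoint */ 20);
        ReplenishmentPlanner planner = new ReplenishmentPlanner();

        boolean scheduled = planner.scheduleDeliveryIfNeeded(store);

        assertTrue(scheduled, "Expected a delivery to be scheduled for a low-stock store");
    }
}

// Minimal stand-ins so the example compiles; a real supply-chain system would be far richer.
class Store {
    final String id;
    final int stockOnHand;
    final int reorderPoint;

    Store(String id, int stockOnHand, int reorderPoint) {
        this.id = id;
        this.stockOnHand = stockOnHand;
        this.reorderPoint = reorderPoint;
    }
}

class ReplenishmentPlanner {
    boolean scheduleDeliveryIfNeeded(Store store) {
        return store.stockOnHand < store.reorderPoint;
    }
}
```

When tests like this run on a Continuous Integration server, the pass/fail report doubles as a progress report the client can actually read.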
