A Culture of AI: Tyson Foods' Lee Slezak On Scaling New Tech Across the Enterprise


When Lee Slezak, Tyson Foods' VP of IT architecture, emerging technologies and analytics, first joined the food company four years ago, AI typically referred to avian influenza, not artificial intelligence. Times, of course, have quickly changed, and now Tyson is knee-deep in a digital transformation journey that's leveraging AI and machine learning to drive its business forward.

Slezak recently joined a CGT webinar to share insight into Tyson's AI culture, the backbone of its analytics practice, how the company attracts and nurtures data science talent, and the unsung hero that makes it all work. Read on for details on its technology vetting process and what Slezak's team is learning as it scales this across the enterprise — including why you can have too much of a good thing when it comes to tech success.

Alarice Rajagopal: Good morning, and welcome to our webinar, “Learn How Tyson Foods' Appetite for Data Is Customer-Driven.” My name is Alarice Rajagopal, and I am senior editor for CGT. I'll be your moderator for today.

Did you know that about 70% of the typical issues that prevent companies from unlocking their full advanced analytics potential are due to a missing link between business and data science? That data and analytics at scale can generate a 5-10% uplift in revenue? Or that brands like Uber have been able to realize AI's potential?


To help us discuss these points in more detail, I'm delighted to introduce our subject matter experts for today. Our first speaker will be Lee Slezak, vice president of IT architecture, emerging technologies and analytics at Tyson Foods. He is responsible for the definition and governance of the technology stacks used within the company, the forward-looking adoption of emerging technologies, and the delivery of key insights through the power of data, AI and analytics.

Prior to Tyson, Lee spent 26 years at HP and most recently, Hewlett Packard Enterprise. During this time, he played a number of different leadership and technology roles across manufacturing, engineering, innovation, and IT. You will find that Lee has a strong passion for technology and focuses on sound connected architectural approaches that enrich the end-user experience. Lee has traveled extensively throughout the world with considerable time spent in Singapore, Japan, Malaysia, India, and China.

Lee will be joined by Srinivasa Gopal Sugavanam, VP and data analytics practice head at Infosys. Gopal leads the data analytics business at Infosys North America, collaborating with clients in the retail, consumer goods, logistics and manufacturing segments and accelerating their transformation journeys through the adoption of cloud, data and AI.

He leverages the three distinct horizons developed by Infosys to help clients make better decisions as a data-driven enterprise, reimagine their business as a digital-native enterprise, and finally leverage data as the new capital to take part in the data economy. In his last two decades with Infosys, Gopal has played diverse leadership roles across delivery centers in India, the Nordics region in Europe, and North America, incubating, building and scaling practices, forging long and trusted relationships with global clients, and cultivating a strong ecosystem of partners.

As you can see, we have two very qualified thought leaders on this topic. Lee and Gopal, thank you both for joining us today. With that, I'd like to go ahead and hand things over to get us started.

Lee Slezak: Thank you very much, and thank you for having me here. I'm looking forward to the discussion with Gopal. As mentioned, I lead an organization called Insights-as-a-Service, and I'm accountable for the technology strategy around IT architecture, our investments in and approach toward emerging technologies and their adoption, as well as analytics. Analytics is the thread that runs through all three of these areas and drives a lot of what we do. Today, we'll talk in depth about our company, go into some of the work that I'm accountable for at Tyson, and then jump into some of the other discussions.

Let me start with a bit about Tyson Foods. Many may know that Tyson is one of the largest protein companies in the United States. We're largely known for our poultry business, but what I would tell you is that we have a substantial investment in a number of other businesses. We were founded in 1935 and continue to grow and thrive across the consumer products business. To put a few numbers around the scale of the company, our goal is to sustainably feed the world and to build the fastest-growing protein brands on the planet today. Aside from the Tyson brand, many don't realize that we also have a lot of consumer brands, including Jimmy Dean Sausage, Ball Park Franks, Hillshire Farm, Hillshire Snacking, State Fair Corn Dogs, Wright Brand Bacon and Aidells Sausages.

To look at the scale we operate at: 155,000 head of cattle processed per week, 461,000 head of pork processed per week, 45 million chickens processed per week, and 74 million pounds of prepared foods, such as the brands I mentioned. For Tyson, it's a very scaled operation. When we look at the technology that supports that scale, it is an immense challenge. We focus not only on how we run the business, but on how we scale to meet the capacities we need to meet. Then, how do we look forward and leverage technologies like AI, ML and analytics to drive us even further?

As mentioned, I have the architecture portion of the IT group, and that's focused on three key areas. First and foremost, to define the technology stack. We started our journey to transform and modernize the technology at Tyson about four years ago. When we began, we were a very heavy on-prem, ERP-focused organization where we looked for ways to extend our ERP platform to take on capabilities that it was not designed to do.

One of the first things we did four years ago was put significant governance in place around the technology stack. What would be in the technology stack, how is it governed, and how do we ensure the capabilities put in place are aligned to key business strategies? It's been an incredible journey. We're very focused on a cloud-first approach, we’re heavy SaaS wherever we can, and we made a good deal of progress over the last four years.

The second piece that I'm accountable for is emerging technologies; think of this really as the R&D function for the IT organization. This organization is focused on not only what's new to Tyson, but what's new to the industry that we can adopt and apply toward business challenges to enable things that wouldn't be possible using the current technologies in the landscape today. This team focuses on prototyping and proofs of concept, and then scaling those into solutions that are running in the facilities.

“Any place where we can get a competitive advantage, where we can't find capability in the marketplace, we'll employ the internal development teams, work with partners, and introduce new technologies that are cloud-native.”
Lee Slezak, Tyson

The other piece of this group is to evangelize and educate the company on how to use new and emerging technologies, whether that's computer vision, advanced analytics and prediction across the board, robotics, automation, etc.

Lastly, and probably the biggest part, is the analytics teams. When I say analytics teams, that covers everything I'm accountable for, from ingestion all the way through modeling, presentation and visualization. Then, we have a dedicated data science team that's focused on putting in place modern forecasting tools, computer vision solutions, and other AI and ML solutions to drive business forward.

Over the last 18 months, data has become the hottest commodity within the company and within the services and solutions that we provide. As COVID hit, data became key. We mobilized teams quickly and put in place descriptive analytics to show us what was happening yesterday and today, but also employed advanced forecasting techniques to look at everything from the spread of the virus, to how to manage vaccinations, to the impacts on the facilities based on the challenges it posed across the world. While that may have been a launchpad to go further into data, it's fundamental to the work that we do and the way that we deliver capabilities and services.

Speaking of our vision, let's talk about how Tyson thinks about technology and where it's driving as we move forward. First and foremost, standard and global platforms are key. Four years ago when we started, we were completely on-prem. There were multiple data centers across the U.S. and across the world. All of the applications were custom, siloed and difficult to integrate with, an island in many cases.

One of the first things we did on the transformation journey was shift to cloud-native solutions and SaaS solutions. Any place where we can get a competitive advantage, where we can't find capability in the marketplace, we'll employ the internal development teams, work with partners, and introduce new technologies that are cloud-native.

Other things that don't provide a competitive advantage, we look to SaaS anywhere we can; it reduces the run rate and drives efficiency because we have teams focused on transformation, rather than run and maintain. Another key aspect of having standard and global platforms is to focus the enterprise on moving towards enterprise scale and a single uniform set of systems and data. Whether it's master data or transactional data, it’s harmonized to take advantage as we move up the stack into more insightful analytics and applications.

The second piece, or the pillar of our strategy, is focused on advanced analytics. We've put in place a number of solutions that leverage computer vision, forecasting and other AI and ML solutions to drive outcomes. The challenge that we've had is focusing on that scaling aspect. It's far less of a challenge to build out the right model to drive us forward, but far more of a challenge to actually get the right scale in place to adopt it and drive value.

The backbone of the analytics practice is the data lake we've created. We partnered with different cloud providers and locked in on a significantly-sized data lake where all the data is harmonized. Then, that data lake is the basis for all the visualizations and solutions we built. We believe that by setting up the standard and global platforms, that enables the predictive and prescriptive analytics we're driving towards, which in turn will allow us to move forward into further automation and robotics.

The protein business and processing of protein is a highly manual set of processes. One of the goals we put in place and made great strides towards, is leveraging newer technologies, such as IoT and cognitive systems to drive next-generation automation and robotics. We look at solutions on the edge to drive data aggregations and position us to make as many decisions as possible, as close to the production plants as we can. Then, aggregate that data to drive the analytics that are important and powerful to the company.
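As a rough illustration of the edge-aggregation pattern described above, here is a minimal Python sketch that summarizes raw sensor readings into fixed time windows at the plant so that only aggregates move upstream; the sensor names, window size and values are hypothetical, not Tyson's actual implementation.

from collections import defaultdict
from statistics import mean

def aggregate_window(readings, window_seconds=60):
    """Bucket raw (timestamp, sensor_id, value) readings into fixed time
    windows and keep only per-sensor summaries, so far less data has to
    leave the plant."""
    buckets = defaultdict(list)
    for ts, sensor_id, value in readings:
        window_start = int(ts // window_seconds) * window_seconds
        buckets[(window_start, sensor_id)].append(value)
    return [
        {"window_start": start, "sensor_id": sensor, "count": len(vals),
         "mean": round(mean(vals), 2), "min": min(vals), "max": max(vals)}
        for (start, sensor), vals in sorted(buckets.items())
    ]

# Hypothetical raw readings (timestamp in seconds, sensor id, value).
raw = [(0, "line1_temp", 3.9), (20, "line1_temp", 4.1), (61, "line1_temp", 4.4)]
for summary in aggregate_window(raw):
    print(summary)  # only these aggregates would be forwarded to the cloud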

Tyson has a dedicated data science team that's focused on putting in place modern forecasting tools, computer vision solutions, and other AI and ML solutions to drive business forward.

Srinivasa Gopal Sugavanam: Very fascinating, Lee. The data is changing and today, data-led transformation is the pivotal point for a lot of the digital transformation that we see happening. I’d like to unpack this journey, focusing on the experiences of driving a traditional organization like Tyson Foods to become an AI-powered enterprise. There are typically three dimensions that organizations that are driving a lot of data and AI initiatives pursue.

1. Embracing data. You've walked through a lot of the details of how to establish a data lake and bring all the data together.

2. Following agile practices that drive velocity. How do you bring about change to support the business?

3. The culture of AI.

Let me begin by asking: how have you been demystifying AI within Tyson? How do you evangelize AI with the business teams?

Slezak: It's been quite a challenge. We are very much a born-analog company, given our age and the industry we're in. When I joined Tyson four years ago and heard the term AI, it meant avian influenza, not artificial intelligence. It was a different mindset based on the history of how we got where we are. One of the things we spent time on early on was evangelization, communication and presentation, whether that was roadshows at key facilities to show what's possible or small proofs of concept. We did a lot of proofs of concept to show that this stuff is real and possible.

Trying to connect the dots between what people know in their day-to-day lives, whether on their phones or tools they use personally, and showing the connection of what we could do to leverage similar technologies to drive the enterprise forward. It's been a lot of communication, a lot of education, and a lot of time with the key stakeholders to show what's possible, but follow that up with demonstrating it. It's one thing to say it's possible, but it's far more important to demonstrate that it can be done.

Sugavanam: Let me follow up on what you said. Have you been successful in establishing AI as a fundamental capability? Or is there still a need to justify investments that the organization makes in AI initiatives across the enterprise?

Slezak: There's always going to be a need to justify all that we do because we run a pretty lean shop, both on the manufacturing side, as well as within the technology space. Everything is focused on business outcomes.

Like many other companies, we strive to have business-led transformations that are IT-enabled. We continue to show the value to demonstrate what's possible and how to move things forward. We found it's almost a flywheel effect: one of the first projects we did gained some notoriety within the company, and eventually we were able to scale it across multiple plants within the poultry business — it was a computer vision system that helped automate the inventory process. In the past, we had very manual processes that relied on people, which were error prone in terms of the way the processes were designed.

We put in vision systems to identify the product being read by various devices, such as scales and other pieces of equipment, and then automate the update of inventory within the storage facilities. Once we were able to demonstrate this, we scaled it out; right now, we have that across eight different poultry plants. Once we scaled it, a lot of eyes opened up.
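As a loose sketch of how a vision-plus-devices inventory flow like the one described above might be wired together (the model call, product label, confidence threshold and inventory record below are hypothetical stand-ins, not Tyson's system):

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class InventoryUpdate:
    product: str
    weight_lbs: float
    location: str

def identify_product(image_bytes: bytes) -> Tuple[str, float]:
    """Stand-in for a trained computer-vision model: returns a product label
    and a confidence score for the frame."""
    return "boneless_skinless_breast", 0.97

def build_update(image_bytes: bytes, scale_weight_lbs: float, location: str,
                 min_confidence: float = 0.90) -> Optional[InventoryUpdate]:
    """Combine the vision result with the scale reading; anything below the
    confidence threshold is routed to a person instead of auto-posting."""
    label, confidence = identify_product(image_bytes)
    if confidence < min_confidence:
        return None  # send to manual review rather than updating inventory
    return InventoryUpdate(product=label, weight_lbs=scale_weight_lbs,
                           location=location)

print(build_update(b"frame-bytes", scale_weight_lbs=1850.0, location="cold_storage_3"))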

However, sometimes you can be the victim of your own success because we ran into a situation where the stakeholders knew, saw and could believe in computer vision. We started to run into a lot of requests that weren't as appropriate for computer vision as they might've been for traditional analytics or even forecasting situations. We always look to partner with key stakeholders across the business, making sure that we're solving the right problems and not just applying technology for technology's sake.

Sugavanam: That is what I've heard some people say about the sledgehammer effect. They believe AI is the hammer that can solve any problem and they go looking for nails. Lee, how do you prioritize and sequence AI initiatives within the enterprise? Are there any best practices you can share?

“Finding a data scientist is going to be more and more difficult given the demand for these skills.”
Srinivasa Gopal Sugavanam, Infosys

Slezak: When we started, it was traditional: whoever we could get access to, whoever had the loudest voice, whoever was willing to provide some funding to see the effort through really drove how we prioritized. As we matured, and this is true of the work in the emerging technologies, data science and AI/ML space, we put together a governance program with a steering committee made up of technology leaders on the CTO staff. They look at the requests coming in, look at the workload of the data science team or whoever is focused on it, and vet the requests out.

We have adopted some best practices, such as the Amazon working-backwards approach: press releases, FAQs and those kinds of tools that Amazon perfected. We employ a lot of that. We look across a series of metrics for each project that comes in. The key one we focus on, before we take on something we believe is going to scale, is whether we have the right capabilities to actually scale it. If we think we've picked a winner in terms of the request and the technology we want to put in place, we look at how we scale it. How do we get the biggest bang for the buck that we possibly can?

Frankly, whether it's working through the culture of people not understanding or not believing that the technology can do what it can do, or our ability to drive value at scale, those are probably the tougher things to work through, rather than the technology itself.

Sugavanam: Again, I'm bringing you closer to your own domain. Have you had any experiences deploying AI in IT operations and have you seen some success with those initiatives?

Slezak: As many people know, we've partnered with Infosys as our managed service provider. Through that partnership, Infosys has brought in a series of tools that we've deployed and driven value from, including looking at incoming tickets and operations to optimize where we focus and drive investments, alongside other things we've done on our own. Maybe slightly away from IT ops, but very related, is the topic of talent. Like anyone else who's trying to source talent in the technology industry, it's a crunch out there right now.

Everybody wants this technology. We employed some forecasting tools and other things to identify where we have talent risks, where we should be investing, and how to drive the best employee experience across the board. We have employed AI across our operations. I would tell you, though, most of where we focus our AI investments is around the business use cases. For us to get the biggest return on investment, we look for those big problems, the very large problems that we can solve in the businesses we run.

Sugavanam: Is there a bias towards picking up problems that drive revenue versus problems that are largely helping you address the cost equation? If you were to prioritize between, let's say two competing projects, would there be a bias towards driving revenue or implementing efficiencies and saving costs?

Slezak: We started by driving efficiencies and cost savings because that's the easiest one to justify and sell across the organizations that we support. Whether it was the inventory use case that I mentioned, or other things that we focused on in terms of safety solutions or quality and grading solutions for products, things like that. As we move forward, we started to focus on more traditional forecasting.

We started in probably the most challenging part of ML and AI, around computer vision, and now we're driving towards forecasting — supply forecasting, demand forecasting, demand sensing — those are key areas where we're investing considerable resources and human capital to come up with the best insights that we can have to drive the company forward.
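For readers newer to the forecasting side, a minimal baseline looks something like the single exponential smoothing sketch below; the demand numbers are hypothetical, and a production demand-sensing system would layer seasonality, promotions and causal signals on top of a baseline like this.

def exponential_smoothing_forecast(history, alpha=0.3):
    """Single exponential smoothing: each new observation nudges the level,
    and the final level is the one-step-ahead forecast."""
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level

weekly_demand = [1200, 1180, 1250, 1310, 1290, 1400]  # hypothetical weekly cases for one SKU
print(f"Next-week forecast: {exponential_smoothing_forecast(weekly_demand):.0f} cases")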

Sugavanam: The key challenge these days seems to be the productivity of data science teams, because from the way you've described it, AI is clearly well established within your enterprise. What are some of the actions you're taking to enable your data science community to be more productive, and how are you addressing the talent part of the equation? Finding a data scientist is going to be more and more difficult given the demand for these skills. Are there any initiatives you're taking to convert your IT talent into data science professionals?

Slezak: It is a real challenge, whether through partners or work that we've done on our own. All of us across the industry are working through the extreme demand for the talent we need in this space. We're looking at a number of different aspects to retain our talent, such as ensuring that we have the best capabilities and tools for our team members to work with. We also spend a lot of time recognizing the work they're doing and putting them in a position to have the right visibility to grow in their careers, as well as to add value for the company.

Bringing in early-career team members and others we can grow into the space is really our core strategy because, given the demand in the marketplace, it's more cost-effective. It's a better experience to bring in the right talent early in their careers and develop them toward the approach we want to take so that we can yield the value we need.

“The question on scaling is an interesting one because if you look across the industry, my experience has been that most of the AI and ML projects don't make it through the other end.”
Lee Slezak, Tyson

In addition to that, we partner with cloud companies or managed service providers. We take a by-all-means-necessary approach to staffing, keeping the team growing and thriving. It's not just let's go hire a captive team or a dedicated team to go work; it's let's bring in our people, let's grow our people, and let's partner with the best people on the planet so that we can get the best results on the planet.

Sugavanam: Is there a preference towards which activities you do yourself, versus what support they expect from a partner? For example, defining use cases versus building models, whether it's operationalizing them or maintaining them. Do you have a certain approach that you follow or what do you do yourself versus what you outsource?

Slezak: We look at whether we've got specific business knowledge that's required to build a model, which is often the case, depending on the use case. If we're in a forecasting project where we need to look at what's going to happen over the next few horizons, we can work with partners who've done this kind of work in the CPG industry or in the technology industry. If we're looking at something specific to our business, we tend to do that in-house, where we've got the subject matter expertise and the right resources to do that work.

The question on scaling is an interesting one because if you look across the industry, my experience has been that most of the AI and ML projects don't make it through the other end.

One thing we learned across our journey is the whole scaling aspect. The culture side of it is one thing, to get the adoption, to get the support, but once you cross that bridge and have this adoption and demand coming in for this type of a solution set — being able to scale that beyond a proof of concept, making sure that we're not doing things on our desktops, that we're leveraging cloud solutions and cloud capabilities to truly scale across the enterprise — that's the biggest challenge we have.

Sugavanam: That is what a lot of folks want to hear about how you have deployed, because we've seen a lot of successful AI initiatives in consumer-facing areas, but doing AI at scale for the enterprise, for your own business teams — you unpacked this well for the audience.

Rajagopal: Thank you, Lee and Gopal, for all the insights you've shared, especially in sharing your journey. You're not alone in a lot of the challenges you're facing. Although some of it might seem basic in nature, building that data foundation is much easier said than done. I appreciate you sharing with the audience.

Lee, did you have a situation where there were multiple databases and analytics tool sets across the business organizations? I know you touched a little bit on that singular data lake. What would you recommend to other CPGs on how to consolidate all of that data into a single database or data lake? Can you talk about the experience your team had, or maybe still has, with this issue?

Slezak: We're no different than anybody else out there who struggles with the sprawl of data and systems across the board. You saw the strategy we put in place to drive toward standard and global systems. Yes, we've absolutely put a lake in place. In fact, we've taken it a step further and leveraged a number of open source solutions to build out not a traditional ETL set of processes, but the more modern ELT, where we're extracting data from the source systems, loading it into the lake, and driving the transformation from within the lake. Is there a secret sauce for how we do that? Honestly, it's persistence, hard work and a lot of focus on the data.
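To make the ELT distinction concrete, here is a minimal Python sketch: the raw extract is landed first, and the transformation runs inside the engine where the data already lives. sqlite3 stands in here for a cloud lake or warehouse, and the table names, plants and figures are hypothetical, not Tyson's schema.

import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a cloud lake/warehouse engine

# Extract + Load: raw records land untouched in a staging table.
conn.execute("CREATE TABLE raw_shipments (plant TEXT, product TEXT, pounds REAL)")
conn.executemany(
    "INSERT INTO raw_shipments VALUES (?, ?, ?)",
    [("springdale", "breast", 5200.0),
     ("springdale", "breast", 4800.0),
     ("dakota_city", "ground_beef", 7600.0)],
)

# Transform: harmonization and aggregation happen where the data already lives.
conn.execute("""
    CREATE TABLE shipments_by_plant AS
    SELECT plant, product, SUM(pounds) AS total_pounds
    FROM raw_shipments
    GROUP BY plant, product
""")

for row in conn.execute("SELECT * FROM shipments_by_plant ORDER BY plant"):
    print(row)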

The biggest challenge we've seen with the lake is not the underlying technology we employed. It scales. Anytime you're in a large cloud-based data warehouse or lake solution, it's elastic; it's going to scale for you. The challenge we've seen, and where we focus our energy, is the modeling of that data: that intersection of the technologists with the business subject matter experts to understand what data needs to be grouped together so that we can get the most value out of it.

Short answer to your question: It's persistence, it's planning, and it's having the right prioritization in place to tackle the first data subjects well. Don’t forget the master data. Master data will make or break you every day of the week.

This has been a challenge for Tyson. We've turned the corner; we're working closely with partners and internal solutions to drive value within our master data. But the unsung hero of what makes all of this work is the data that glues it all together.

Rajagopal: Gopal, anything you want to add?

“You will find the brightest minds come in and solve the problem, but then they're also going to go out and look for the next big thing to solve. Retaining that knowledge or retaining some of those individuals on the team is key.”
Srinivasa Gopal Sugavanam, Infosys

Sugavanam: I'll underline what Lee said: it's persistence. Focus on data and making data trustworthy. A lot of times people focus on the engineering aspects of bringing data into a lake, but they ignore the part about organizing it for democratization and consumption.

Unless you keep a focus on consumption, you're probably going to be creating a data swamp. You'll be bringing a lot of data together, but it's not going to be put to use.

The second thing Lee mentioned is master data and helping build trust in the data. A lot of the initiatives we undertake around data governance and data quality help build that trust. Once you've created that trust, you've established the pull. Then it's all about the velocity at which you can bring in data and organize it for consumption. Those are some of the inputs I would share.

Rajagopal: Lee, with an all-means-necessary approach to accessing AI talent, what have you learned about managing successful hybrid teams of internal and external providers?

Slezak: Whether it's in the AI space or across information technology in general, I would be surprised to find a company at scale that isn’t working in a hybrid environment with partners. It's the fundamentals: Do you have the scope right? Do you understand the problem that you're trying to solve? Is it clear to the team members what outcomes we're trying to achieve? Those are key. The third thing to look at is, how do you assign the right value to the work to get the right prioritization, and can you get the right output with the right support?

Again, there's not a silver bullet here. We've looked at and operated a couple of different models, whether it's traditional staff augmentation or integrating a partner or team member into our teams, we've been successful with that.

We've brought in entire pods of team members to allow them to drive end-to-end capabilities on our behalf with governance. Then, in other pieces, we simply took specific sets of use cases and worked with the partner to say, “You go build this model and solve this problem,” then we'll integrate it into our ecosystem, scale it, and drive value.

Again, this is no different than any other aspect of partnering with the exception of the outcome has to be clear. The value has to be clear. The outcome can't just be a white paper on the best model you've created, how accurate it is, or how elegant it may be. The outcome has to be the key — it has to be clear across the team, whether they're internal or external.

Sugavanam: Lee, you've described different engagement models — a very similar experience to what I've seen. Looking at the life cycle, there's problem finding, problem framing, problem solving at scale, and then how to sustain and maintain it. The first one is led by the enterprise, its business teams and IT teams, and supported by a partner. A partner can be a catalyst, but it has to be driven by the client.

The second area is where we partner, and that's where we help scale and do this quickly, bringing in that speed when you're moving from, let's say, a proof-of-concept approach to something deployed at scale. That's where a partner takes a larger role than the client, to actually support and govern.

The last part is putting in place ways to support this on an ongoing basis, done very efficiently or in a managed services model. Increasingly, there are requests from clients to ensure that knowledge is retained within the organization, because this capability is now very strategic. Anything to do with data and AI is really strategic. We want to ensure that we are leaving behind enough knowledge on whatever we're working on. I've found this to be very successful where there is deep involvement from client representatives as well as from partner resources.

One of the other things that Lee mentioned is recognizing and retaining key talent. You will find the brightest minds come in and solve the problem, but then they're also going to go out and look for the next big thing to solve. Retaining that knowledge or retaining some of those individuals on the team is key.

Recognizing who those individuals are and finding a way to make their journey and careers interesting in this setup, whether they're part of a partner team or a client team, is very important. That's going to make a big difference in how some of these initiatives can scale.

Rajagopal: Great. Thank you both. For the next question, I want to stay focused on people. It asks: does your organization believe in dedicated data scientists or in empowering business users to self-serve? It wasn't directed to anyone in particular. Who wants to go first?

“At Tyson, if you look at just the specific data science area, every now and then you'll find a pocket of resources that understand what the power of data science is and the AI and ML capabilities that can enable that. But for the most part where we're at in our journey is early.”
Lee Slezak, Tyson

Slezak: It depends on the domain. If we're looking at descriptive analytics where we can drive self-serve with governance, not just open-ended self-serve, it's an enabler. Most of the time, if you look across the technology resources, we're anywhere between 150-200% oversubscribed, if you look at the supply of resources versus the demand.

At Tyson, if you look at just the specific data science area, every now and then you'll find a pocket of resources that understand what the power of data science is and the AI and ML capabilities that can enable that. But for the most part where we're at in our journey is early.

And so we have a dedicated team that we've stood up in a couple of different locations, we have a couple of teams actually. And that's really been our approach because I think it's important as you're starting your journey, that you get some quick wins, that you get some early credibility to allow you to have the capital to move forward. And for us, the best decision that we could have made was to dedicate resources in that space so that we could grow the approach and be successful, and then look over time to expand that out to others.

Sugavanam: Brilliant. Again, I think Lee answered this very well. It's always a question of where you dedicate effort versus where you create your COE. What I've seen some clients do is build the first project with dedicated resources, show the success, and then that team becomes the COE, which enables other teams to replicate.

Again, I've seen organizations which are global, so you do those in one region and then that becomes the template for other regions to follow. You could have partner resources, supplement some of the dedicated data science resources that you may have had in the first project. That could be another way on how you scale, but it's a hybrid approach.

Usually, it's the first success that really matters. Getting to that will show them that there is value that's been realized, and it is very important for us to sustain these initiatives.

Rajagopal: Of course, with AI there's the question of ethics. While automation can bring productivity and accuracy, it also raises ethical questions. For example, if HR is using profile screening, a machine might do a good job of selecting the profiles most likely to be successful in interviews and later in their careers, but the fact that it is based on past data may introduce biases.

How do you plan to address the ethical AI question? Maybe not just in that example, but just in general.

Slezak: That's not a practice that we employ in terms of using AI solutions for screening team members — that's not something we've done. It may be something we look at in the future, but not something that we've done. In many cases, what we find is that old adage: all models are wrong, but some are useful, and you have to know that going in. You have to know that it's like a weather forecast.

In many cases you're relying on the statistical model and the algorithms that have been developed to get you most of the way there. At some point, based on either the use case or the scenario, you may need a human in the middle of that to really be the gatekeeper.

Whether it's in the forecasting work we've done or in leveraging people to train models as they go, the goal is to be less reliant on all of that historical data as you scale and move forward. Take advantage of new data as it comes in. Constantly train, constantly have eyes on the solution. It is definitely something we think about. Most of the use cases where we've focused our energy have been closer to things like product quality or safety aspects, inspections and forecasting. We've done a little bit around people data, just to understand where we may have some risks in terms of talent, but most of our efforts have not been focused in that space.
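One simple way to picture constantly training as new data comes in is a sliding-window estimate that lets fresh observations displace older ones. The toy sketch below uses a rolling mean as a stand-in model with hypothetical numbers; a real system would retrain an actual forecasting or vision model on a schedule and keep a human reviewing its output.

from collections import deque

class RollingEstimator:
    """Toy stand-in for a continuously retrained model: only the most recent
    observations stay in the window, so older history drops out as new data
    arrives."""

    def __init__(self, window=4):
        self.recent = deque(maxlen=window)

    def observe(self, value):
        self.recent.append(value)

    def predict(self):
        return sum(self.recent) / len(self.recent)

model = RollingEstimator(window=4)
for actual in [100, 105, 140, 150, 160]:  # hypothetical weekly values
    model.observe(actual)

print(f"Current estimate: {model.predict():.1f}")  # reflects only the last four weeks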

Sugavanam: The two areas where the question of ethics really comes up:

1. When you're measuring the productivity of your teams and workforce, which in our case includes team members working on IT projects.

2. When you're handling personally identifiable information, especially in the consumer-facing solutions that you have.

The key is governance, very strong governance: understanding to what extent technology should be leveraged and when to bring in human intervention to make those decisions. Governance is important. Along with ethical AI, another concept I've always heard is explainable AI: do people understand what this model is going to do and how some of these insights and decisions are being arrived at? Making it explainable helps build trust and drive adoption. It's a very important aspect to remember when driving these initiatives in the enterprise.

Rajagopal: Thank you both. I have a question coming in around vision and initiatives in terms of next steps. Sort of where do you go from here? But I want to add a little bit more detail and just ask, even when you're thinking about next steps, how do you know what to prioritize? Where do you go next?

If someone brings something to you and says, “Hey, I think we can accomplish X, Y, Z with this technology.” What is the next step? Or how do you know where to focus?

Slezak: No, it's a great question. I mentioned a little bit early on about how we do have some processes around some of the resources that we leveraged for emerging technologies and data science to try and make sure we work on the right things. But what I would tell you is that we also leverage a similar vetting process just across the board for technology projects at Tyson. We have a steering committee that's made up of key business stakeholders that really enable us to help prioritize. And the way we did it was we worked with our business partners and key stakeholders to define the scoring process for projects rather than the technology team trying to decide, what's the next big win for us?

When requests come in, they go through a series of analysis and vetting. Part of that is scoring with business impact in mind: How does it apply to company strategy? What's the cost? What's the return? The standard metrics you would expect. We meet on a regular basis and look across all of the requests that come in. Based on those scores, we draw a cutline and negotiate. That cutline represents capacity and what is feasible based on the resources and capabilities we have. That's how we manage priority and decide to work on the right things. Again, for us to be successful, it has to be a business-led transformation or business-led initiative that we enable. It's very rare for a technology group to drive an initiative based solely on technology; in my experience, it has been rare to get the right value out of that or even be successful.
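As a hypothetical sketch of the kind of scoring and cutline exercise described here, written in Python with illustrative criteria, weights and project names that are not Tyson's actual rubric:

def score_request(request, weights=None):
    """Weighted score over criteria like the ones described above: strategy
    alignment, business impact, and return versus cost."""
    weights = weights or {"strategy_fit": 0.3, "business_impact": 0.4,
                          "return_vs_cost": 0.3}
    return sum(weights[key] * request[key] for key in weights)

requests = [
    {"name": "demand sensing pilot", "strategy_fit": 9, "business_impact": 8, "return_vs_cost": 7},
    {"name": "plant CV inspection",  "strategy_fit": 8, "business_impact": 9, "return_vs_cost": 6},
    {"name": "chatbot refresh",      "strategy_fit": 4, "business_impact": 3, "return_vs_cost": 5},
]

capacity = 2  # the cutline: how many initiatives the teams can actually staff
for rank, req in enumerate(sorted(requests, key=score_request, reverse=True)):
    status = "fund" if rank < capacity else "below cutline"
    print(f"{req['name']:22s} score={score_request(req):.1f} -> {status}")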

Rajagopal: Thanks, Lee. Gopal, working with your clients, what are you seeing? How do they know where to focus next?

Sugavanam: Like Lee said, it's about knowing the impact to the business and the effort required to build these out. We want to pick things that are not only desirable, but also feasible and viable. I've seen clients use the design thinking framework of desirability, feasibility and viability. Of course, hard numbers matter.

At the end of it, it has to be measured. If the measurements hold up against the effort and time that's been invested, those initiatives gain momentum. That's one approach: focusing on initiatives that drive costs down or deliver better efficiencies. When clients start with revenue-generating initiatives, that's where they want to see success — usually around sales, marketing and pricing — before they move on to operations and supply chain. It depends on where the company's priorities are, and knowing which objectives you're applying this to is important.

“As an enabler for these initiatives, you should be looking and thinking of scale because once you've made this successful, how do you then cater to all of this backlog that's going to start coming your way?”
Srinivasa Gopal Sugavanam, Infosys

Rajagopal: Thank you. Before we go, I always try to leave the audience with a tip or best practices. Lee, quickly, what would you say to leave the audience with today?

Slezak: Whether you're looking at AI or any technology initiative, start small. Start with a well-defined scope, but think big. Think about how you could leverage the technology and capability across the key challenges that the organization has seen.

Then probably, equally as important, move fast. The half-life of these initiatives, especially without full organizational support or acceptance, is short. Think big picture, look for that small piece of scope that you can attack to demonstrate the capability of what's possible, then move as fast as you can to get it socialized, scaled, and deployed.

Rajagopal: I'm writing that down: Start small, think big, move fast.

Sugavanam: This has been very insightful, Lee, thanks for spending the time. We've unpacked a few things here. The three key takeaways:

There has always been an approach toward how we evangelize and demystify AI. It's education: getting people to know what it is and what it does for you or your role. The more evangelization and upfront education you do, the more you reduce barriers and increase adoption.

We've spoken a lot about prioritization, and there are quite a few takeaways from the experience Lee described.

Lastly, how we scale, both from a technology perspective as well as from a talent perspective.

There were very interesting inputs that came out for both clients and partners. It's the same challenge that we are addressing as well; there is so much demand for this talent in the market, and we're looking at interesting ways to scale. Focus on these three priorities: take the business along, prioritize and focus on the right initiatives that will drive value for you, and look at scale. As an enabler for these initiatives, you should be thinking of scale, because once you've made this successful, how do you then cater to all of the backlog that's going to start coming your way?

Rajagopal: Thank you both. I'd like to thank our speakers, Lee and Gopal, for giving us their subject matter expertise today. I'd also like to thank Infosys for sponsoring today's webinar. Finally, thank you to all of our attendees for devoting some of your very valuable time to be with us today. I hope you found it worthwhile. Enjoy the rest of your day.
