Extreme Innovation With AI: Stanley Black & Decker’s Mark Maybury

Me, Myself, and AI | A Podcast on Artificial Intelligence in Business

Why do only 10% of companies succeed with AI? We’re on a mission to figure it out. On Me, Myself, and AI, you’ll meet the people who are achieving big wins with AI. This season we’re talking to leaders from Stanley Black & Decker, LinkedIn, eBay, and more about their experiences with human/machine collaboration.

Stanley Black & Decker is best known as the manufacturer of tools for home improvement projects, but it also makes products the average consumer seldom notices, like fasteners to keep car parts secure and the electronic doors typically used at retail stores. Me, Myself, and AI podcast hosts Sam Ransbotham and Shervin Khodabandeh sat down with Mark Maybury, the company’s first chief technology officer, to learn how artificial intelligence factors into this 179-year-old brand’s story.

Transcript

Sam Ransbotham: AI applications involve many different levels of risk. Learn how Stanley Black & Decker considers its AI risk portfolio across its business when we talk with the company’s first chief technology officer, Mark Maybury.

Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of information systems at Boston College. I’m also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.

Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG, and I colead BCG’s AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities across the organization and really transform the way organizations operate.

Sam Ransbotham: Today we’re talking with Mark Maybury, Stanley Black & Decker’s first chief technology officer. Mark, thanks for joining us. Welcome.

Mark Maybury: Thank you very much for having me, Sam.

Sam Ransbotham: Why don’t we start with your current role? You’re the first chief technology officer at Stanley Black & Decker. What does that mean?

Mark Maybury: Well, back in 2017, I was really delighted to be invited by our chief executive officer, Jim Loree, to really lead the extreme innovation enterprise across Stanley Black & Decker. So I get involved in everything from new ventures to accelerating new companies, to fostering innovation within our businesses and just in general being the champion of extreme innovation across the company.

Sam Ransbotham: You didn’t start off as CTO of Stanley Black & Decker. Tell us a bit about how you ended up there.

Mark Maybury: If you look at my history — you know, “How did you get interested in AI?” — AI started when … literally, I was 13 years old. I vividly remember this; it’s one of those poignant memories: [In] 1977, I saw Star Wars, and I remember walking out of that movie being inspired by the conversational robots — R2-D2, C-3PO — and the artificial intelligence between the human and the machine. And I didn’t know it at the time, but I was fascinated by augmented intelligence and by ambient intelligence. They had these machines that were smart and these robots that were smart. And then that transitioned into a love of actually understanding the human mind.

In college, I studied with a number of neuropsychologists as a Fenwick scholar at Holy Cross, working also with some Boston University faculty, and we built a system to diagnose brain disorders in 1986. It’s a long time ago, but that introduced me to Bayesian reasoning and so on. And then, when I started my career, I was trained really globally: I studied in Venezuela as a high school student; as an undergraduate, I spent eight months in Italy learning Italian; and then I went to England and Cambridge and I learned English.

Sam Ransbotham: The real English.

Mark Maybury: The real English. [Laughs.]

Sam Ransbotham: C-3PO would be proud.

Mark Maybury: C-3PO, exactly! … Indeed, my master’s was in speech and language processing. Sorry, you can’t make this up. I worked with Karen Spärck Jones, a professor there who was one of the great godmothers of computational linguistics. But then I transitioned back to becoming an Air Force officer, and right away, I got interested in security: national security, computer security, AI security. I didn’t know it at the time, but we were doing knowledge-based software development, and we’d think about “How do we make sure the software is secure?”

Fast-forward 20 years. I was asked to lead a federal laboratory — the National Cybersecurity FFRDC [federally funded research and development center] at Mitre, supporting NIST [the National Institute of Standards and Technology]. I had come up the ranks as an AI person applying AI to a whole bunch of domains, including computer security — building insider threat-detection modules, building automated penetration-testing agents, doing a lot of machine learning on malware — working with some really great scientists at Mitre, in the federal government, and beyond, [in] agencies and in commercial companies.

And so that really transformed my mind in terms of how do we … for example, I’ll never forget working together with some scientists on the first ability to secure medicine pumps, which are the most frequently used devices in a hospital. So that’s the kind of foundation of security thinking and risk management that comes through. I got to work with the great Donna Dodson at NIST and other great leaders. Those really were foundational theoretical and practical underpinnings that shaped my thinking in security.

Sam Ransbotham: But doesn’t it drive you crazy, then, that so much of the world has this “build it and then secure it later” approach? I feel like that’s pervasive in software in general, but certainly around artificial intelligence applications. It’s always the features first and security later. Doesn’t it drive you insane? How can we change that?

Mark Maybury: There are methods and good practices — best practices — for building resilience into systems, and it turns out that resilience can be achieved in a whole variety of ways. For example, we mentioned diversity. That’s just one strategy. Another strategy is loose coupling. The reason pagodas in Japan famously last for hundreds and hundreds of years is that they combine central structures that are really strong with hanging structures that are loosely coupled and can absorb energy from the earth during earthquakes.

So these design principles — if you think about a loosely coupled cyber system or piece of software, and even, of course, decoupling things, right, so that you disaggregate capabilities, so that if a power system or a software system goes down locally, it doesn’t affect everyone globally — some of these principles need to be applied. They’re systems security principles, but they can absolutely be applied in AI. I mean, it’s amazing how effective people can be when they’re in an accident. They’ve got broken bones, they’ve got maybe damaged organs, and yet they’re still alive. They’re still functioning. How does that happen? Nature’s a good inspiration for us.

We can’t forget, in the end, that our company has a purpose: for those who make the world. And that means that we have to be empathetic about and understanding of the environments these technologies are going into, and make sure that they’re intuitive, transparent, learnable, and adaptable to those various environments, so that we serve those makers of the world effectively.
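
To make the loose-coupling idea concrete, here is a minimal Python sketch — illustrative only, with hypothetical stage names and data, not Stanley Black & Decker code — in which two stages share nothing but a queue, so a fault in one stage is absorbed locally instead of cascading through the system.

```python
# Toy illustration of the loose-coupling principle described above:
# the stages share only a queue, so one stage's failure degrades the
# system instead of crashing it. All names and data are hypothetical.
import queue
import random

def sensor_stage(out_q: "queue.Queue[float]", n_readings: int) -> None:
    """Produce readings; a flaky sensor drops some but never blocks downstream."""
    for _ in range(n_readings):
        try:
            reading = random.gauss(100.0, 5.0)
            if random.random() < 0.2:
                raise IOError("sensor glitch")  # simulated local fault
            out_q.put_nowait(reading)
        except (IOError, queue.Full):
            pass  # the fault is absorbed locally; the consumer keeps running

def analytics_stage(in_q: "queue.Queue[float]") -> float:
    """Consume whatever arrived; degrade gracefully when data is missing."""
    readings = []
    while not in_q.empty():
        readings.append(in_q.get_nowait())
    # Fall back to a safe default rather than failing globally.
    return sum(readings) / len(readings) if readings else float("nan")

if __name__ == "__main__":
    q: "queue.Queue[float]" = queue.Queue(maxsize=50)
    sensor_stage(q, 30)
    print(f"mean reading: {analytics_stage(q):.2f}")
```

Because the stages never call each other directly, either one can fail, restart, or be replaced without the other noticing — a software analogue of the pagoda’s loosely coupled structure.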

Shervin Khodabandeh: Mark, as you’re describing innovation, I think your brand is very well recognized and a lot of our audience would know [it], but could you just maybe quickly cover — what does Stanley Black & Decker do, and how have some of these innovations maybe changed the company for the better?

Mark Maybury: Well, it’s a great question. One of the delights of coming to this company was learning what it does. So I knew Stanley Black & Decker, like many of your listeners will know, as a company that makes DeWalt tools — hand tools or power tools or storage devices. Those are the things that you’re very familiar with. But it turns out that we also have a several-billion-dollar industrial business. We robotically insert fasteners into cars, and it turns out that 9 out of every 10 cars or light trucks on the road today are held together by Stanley fasteners.

Similarly, I didn’t know beforehand, but in 1930 we invented the electronic door — the sliding door. So next time you walk into a HomeGoods or Home Depot or a Lowe’s, or even a hospital or a bank, if you look up and to the left, you’ll notice — there’s a 1-in-2 chance there’ll be a Stanley logo, because we manufacture one of every two electronic doors in North America.

And there are other examples, but those are innovations — whether it be protecting 2 million babies with real-time location services in our health care business or producing eco-friendly rivets that make electric vehicles lighter. These are some examples of the kind of innovations that we’re continuously developing. Because basically, every second, 10 Stanley tools are sold around the world — every second. And so whether it’s Black & Decker, whether it’s DeWalt, whether it’s Craftsman, these are household brands whose inventive future we have the privilege to influence.

Shervin Khodabandeh: You’re really everywhere. And every time I sit in my car now, I’m going to remember that, like the strong force that keeps the nucleus together, you are keeping my car together. That’s fantastic. Can you give us an example of extreme innovation versus nonextreme?

Mark Maybury: Sure. By extreme, we really mean innovation of everything, innovation everywhere, innovation by everyone. Interestingly, within the company we delineate six different levels of innovation. Just in the past six months, we’ve become much more disciplined across the entire corporation, with a common rubric for how we characterize things. So it’s a great question. Levels one and two are incremental improvements, let’s say to a product or a service. Once we get to level three, we’re talking about something where we’re beginning to make some significant change to a product. When we get to level four, we’re talking about maybe three or more major new features — something you’re really going to notice. When we talk about a level five, this is first of a kind, at least for us. It’s something that we may have never experienced in our marketplace. Those we oftentimes call breakthrough innovations.

And finally, there’s level six — radical innovations. Those are things that are world firsts. To give you a concrete example, we just introduced to the marketplace the first utilization of pouch battery technology, a successor to the Flexvolt batteries: essentially an ability, using pouch technology, to deliver a 50% increase in power for tool batteries, two times the life cycle, and reductions in weight and size. So that’s an example of an extreme innovation that’s going to revolutionize power tools. That’s called Powerstack.
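
For readers who like things concrete, the six-level rubric is simple enough to encode. The sketch below paraphrases the levels as Maybury describes them — the descriptions and the function are illustrative assumptions, not the company’s actual rubric.

```python
# Hypothetical encoding of the six-level innovation rubric described above.
# The level descriptions paraphrase the transcript; none of this is SB&D code.
INNOVATION_LEVELS = {
    1: "incremental improvement to a product or service",
    2: "incremental improvement to a product or service",
    3: "significant change to a product",
    4: "three or more major new features -- noticeably different",
    5: "first of a kind for the company (breakthrough innovation)",
    6: "world first (radical innovation)",
}

def describe_level(level: int) -> str:
    """Return the rubric description for an innovation level (1-6)."""
    if level not in INNOVATION_LEVELS:
        raise ValueError("innovation level must be between 1 and 6")
    return INNOVATION_LEVELS[level]

print(describe_level(6))  # world first (radical innovation)
```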

Another example we brought forward in Black & Decker [is] Pria. Pria is the first conversational home health care companion. It’s an example of using speech and language and conversational technology to support medicine distribution to those who want to age in place, for example, in the home, but also using AI to detect anomalies and alert caretakers to individuals. So those are examples that can be really transformative.

Shervin Khodabandeh: Levels one through six implies there is a portfolio and that there is an intention about how to build and manage and evolve that portfolio. Can you comment a bit [on] how you think about that and how much, like in level one versus level six, and what are some of the trade-offs that you consider?

Mark Maybury: That’s an excellent question. Basically, it is really market-driven, and it’s even going to be further product- and segment-driven. If you’re selling a software service, you’re going to want to have that modified almost [in] real time. But certainly within months, you’re going to want to be evolving that service so that incremental modification might occur. We have an ability to just upload a new version of our cyber-physical end effector, if you will — whatever it happens to be.

But to answer your question: Oftentimes, if companies don’t pay attention to their levels one through six — from incremental all the way up to radical — they’ll end up with a portfolio that drifts toward incrementalism, focused only on minor modifications. Those are easy to do. You get an immediate benefit in the marketplace, but you don’t get a medium- or long-term shift. And so what we intentionally do is measure, in an empirical fashion, how much growth, how much margin, and how much consumer satisfaction we’re getting out of those level ones all the way up to level sixes. Because any organization is going to naturally be resource-constrained in terms of money, in terms of time, in terms of talent. What you need to do is optimize. And the marketplace may reward you for, let’s say, having new products and services in level four, which have major improvements, but penalize you for having radical improvements, because people just can’t … it’s this cognitive dissonance: “What do you mean, home health companion? I don’t know what that is. I just want a better tongue depressor.”

And so in that case, you really need to appreciate what the marketplace is willing to adopt, and if you do have a radical innovation, you have to think about how you’re going to get it into the channel. And one final thing I’ll say, because your question about portfolio is an excellent one: We actually go one step further. Not only do we look at what the distribution of the levels is and what the response to those investments is over time, but for any particular investment, we also put it into a portfolio that characterizes the technical risk and the business risk. We use technology readiness levels, which come out of NASA and the Air Force — my previous life — and are used now in the business community, and then we invented our own.

Previously, when I was working for the federal government, we created commercial readiness levels, and I’ve imported those into Stanley Black & Decker. Now we actually have a portfolio for every single business and for the company as a whole, for the first time ever. That’s something we’re really delighted to finally bring to the company — an ability to look at our investments as a portfolio — because only then can we see: Are we trying to achieve “unobtainium,” because it’s technically unachievable, or, equally bad, is there no market for this? You may invent something that’s really great, but if the customer doesn’t care for it, it’s not going to be commercially viable. And so those are important dimensions to look at in portfolio analysis.
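
A rough sketch of what such a two-axis portfolio might look like in code, assuming NASA-style technology readiness levels (TRL 1-9) and an analogous nine-point commercial readiness scale. The `Investment` class, the thresholds, and the example projects are all illustrative assumptions, not the company’s actual model.

```python
# Sketch of a two-axis readiness portfolio: technology readiness (TRL)
# versus commercial readiness (CRL). Thresholds and projects are invented
# for illustration; this is not Stanley Black & Decker's actual model.
from dataclasses import dataclass

@dataclass
class Investment:
    name: str
    trl: int  # 1 = basic principles observed ... 9 = proven in operation
    crl: int  # 1 = no identified market ... 9 = broad commercial adoption

def risk_bucket(inv: Investment) -> str:
    tech_risk = "high" if inv.trl <= 4 else "low"
    market_risk = "high" if inv.crl <= 4 else "low"
    if tech_risk == "high" and market_risk == "high":
        return "unobtainium watch: high technical AND market risk"
    if tech_risk == "high":
        return "technical risk: fund feasibility work first"
    if market_risk == "high":
        return "market risk: validate demand before scaling"
    return "ready: candidate for commercial execution"

portfolio = [
    Investment("incremental tool refresh", trl=9, crl=8),
    Investment("conversational health companion", trl=6, crl=3),
    Investment("novel battery chemistry", trl=3, crl=2),
]
for inv in portfolio:
    print(f"{inv.name}: {risk_bucket(inv)}")
```

Plotting every investment on this grid is what makes drift toward incrementalism visible: a healthy portfolio carries deliberate bets in the high-risk quadrants, not just a cluster in the “ready” corner.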

Shervin Khodabandeh: I’m really happy that you covered risk, because that was going to be my follow-on question. Even that must be a spectrum of risk and a decision: How much risk is the right level of risk, and how long do you wait to know whether the market really likes something or not? I’m not going to put words in your mouth, but I was going to infer that you’re saying that’s a lever and a decision that you manage based on the economics of the market and the company, and when you want to be more risky versus less risky.

Mark Maybury: Absolutely. And there are many voices that get an opportunity to influence the market dynamics. If you think of Porter’s Five Forces model, classically you’ve got your competitors, your suppliers, your customers, and yourself, and all of these competitive forces are active. And so one of the things we try to do is measure — is listen. Our leadership model within our operating model at the company is listen, learn, and lead. That listening and learning part is really, really critical. You have to be listening to the right signals — a customer signal, a technological disruption signal, an economic signal, a manufacturing and supply signal; you need all those signals. And then, importantly, you need lessons learned; you need good practices.

Early on, on the idea generation side: Are you using design thinking? Are you using diverse teams? Are you gathering insights in an effective way? Then, as you go through to generating opportunities: Are you beginning to do competitive analysis, like I just talked about? As you begin to look into specific business cases: Are you trying things out with concept cars or proofs of concept and then getting to … “Maybe we don’t have the solution. Maybe we ought to have some open innovation outside the company.”

And then, ultimately, in our commercial execution: Do we have the right sales teams, the right channels, the right partnerships to go to scale? The challenge is, whether it be manufacturing or products, we can oftentimes get into pilot purgatory. We can create something that looks really, really exciting and promising to the marketplace, but it’s unmanufacturable, or it’s unsustainable, or it’s uninteresting or uneconomical. And that’s really not good. You have to have a holistic intent in mind throughout the process, and then, importantly, a discipline to test, to measure, and to fail fast — and eventually be ready to scale quickly when something does actually hit, if you will, the sweet spot in the market.

Sam Ransbotham: So there are lots of different things in these levels. Can you tie them to artificial intelligence? Is artificial intelligence riskier in terms of market risk? Is it riskier in terms of technical risk? How is that affecting each of your different levels, and what’s the intersection of that matrix with artificial intelligence?

Mark Maybury: Great question. Our AI really applies across the entire company. We have robotic process automation [RPA], which is off the shelf, low risk, provable, and we automate elements in IT, in finance, in HR. We actually have almost 160 digital employees today that just do automated processes. And we call it not only AI but sometimes augmented intelligence, not artificial intelligence: How do we augment the human to make them more effective? So to your question, what’s risky —

Sam Ransbotham: That seems [like] less risk.

Mark Maybury: That’s very low risk. RPAs are very, very low risk. However, if I’m going to introduce Pria into the marketplace, or Insight, which is a capability in our Stanley industrial business for IoT measurement and predictive analytics — for everything from shears to very large-scale excavation equipment — in that case, there could be very high risk, because there might be user adoption risk, there’s sensor relevance risk, there’s making sure your predictions are going to work well. There could be a safety risk as well as an economic risk. So you want to be really, really careful to make sure that the technologies in those higher-risk areas will work really, really well, because they might be life critical if you’re giving advice to a patient or guidance to an operator of a very big piece of machinery.

And so we really have AI across our whole business, including, by the way, in our factories on automation. One of the ways we mitigate risk there is we partner. We work with others so that they actually have de-risked a lot of the technologies. So you’ll see mobile robots from third parties, you’ll see collaborative robots from third parties that we’re customizing and putting to scale in our factories and de-risking them. So on that matrix, they’re much more distributed across the spectrum of risk.

Sam Ransbotham: One of the things that Shervin and I have talked about a few times is this idea of how artificial intelligence maybe steers people toward incremental improvements. Maybe it’s the ability of these algorithms to improve an existing process that somehow steers people toward level one versus level six. Are you seeing that? Are you able to apply artificial intelligence to these level five, level six types of projects?

Mark Maybury: We absolutely have AI across the spectrum. When it comes to AI, the stuff lower on the technical-commercial scale tends to be commercially proven. It tends to have multiple use cases: Others have deployed the technology; it’s been battle-hardened. But the reality is, there’s a whole series of risks. We actually just recently published our responsible AI set of policies at the company and made them publicly available, so any other diversified industrial or tech company, consultancy, or small-to-medium enterprise can take a look at what we do. And I’ll give you a very simple example, and it gets a bit to your point of “Well, will they gravitate to the easier problems?” Well, not necessarily.

One of the areas of risk is making sure that your AI sensors or classifiers are in fact not biased and that they’re resilient. And one of the ways you make sure they’re resilient and unbiased is to make sure that you have much more diversified data. That means if you have more users or more situations using your AI systems and there’s active learning going on — perhaps reinforcement learning while that machine’s operating, most likely human-supervised, because you want to make sure that you’re not releasing anything that could adversely affect an operator or end user — then the more data you get, the more risk you can reduce, but also the higher performance you can get. So it’s a bit counterintuitive.

So you can actually become a bit more innovative in some sense, or just smarter in the AI case, because you have more exposure — in the same way that people who go through high school to university to graduate school become much more effective at learning and communicating because the challenge increases along those levels. Same thing with a machine: You can give it easier examples — more incremental, simple challenges to that system — and as the challenges get more difficult, as I go from the consumer, to the prosumer, to the pro, the intelligence of that system grows … because the pro knows a lot more. She’s been out working, constructing, for 20 years, or building things in a factory for a long time, and knows what kinds of learning that machine can leverage and can expose that machine to more sophisticated learning.

For example, for predictive analytics, if I want to predict an outage and I’ve only seen one kind of outage, I will only be able to deal with that outage. If I’ve seen 30 different kinds of outages, I’m much better and much more resilient, because I know not only what I know but — equally important, perhaps more important — what I don’t know. And if I see something for the first time, and I’ve seen 30 different things, and this is a brand-new one, I can say, “This doesn’t fit with anything I’ve seen. I’m ignorant. Hold up — let’s call a human. Tell them it’s an anomaly. Let’s get the machine to retrain.”
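
Maybury’s “I know what I don’t know” point is essentially novelty detection with abstention. Here is a minimal NumPy sketch under that interpretation — the outage types, features, and threshold rule are synthetic stand-ins, not Stanley Black & Decker’s models.

```python
# Minimal sketch of "know what you don't know" for outage prediction:
# classify against known outage signatures, but abstain and escalate to a
# human when a reading is unlike anything seen before. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
# Pretend feature vectors for outage types already observed in the field.
known_outages = {
    "bearing_wear": rng.normal([1.0, 0.2], 0.1, size=(50, 2)),
    "power_sag":    rng.normal([0.1, 1.5], 0.1, size=(50, 2)),
}
centroids = {k: v.mean(axis=0) for k, v in known_outages.items()}
# Abstain when farther from every centroid than the worst in-class example.
threshold = max(
    np.linalg.norm(v - centroids[k], axis=1).max()
    for k, v in known_outages.items()
)

def diagnose(x: np.ndarray) -> str:
    dists = {k: np.linalg.norm(x - c) for k, c in centroids.items()}
    label, d = min(dists.items(), key=lambda kv: kv[1])
    if d > threshold:
        return "ANOMALY: unseen failure mode -- alert a human, queue for retraining"
    return f"known outage: {label}"

print(diagnose(np.array([1.0, 0.2])))  # close to bearing_wear -> classified
print(diagnose(np.array([5.0, 5.0])))  # far from everything -> abstain
```

The abstention branch is the risk-management step: rather than guessing on an unseen failure mode, the system escalates to a human and queues the example for retraining.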

Sam Ransbotham: Where did these responsible principles come from? Is that something you developed internally, or is that something you’ve adopted from somewhere else?

Mark Maybury: So first, the motivation. Why do we care about responsible AI? It starts from some of my 31 years working in the public sector, understanding some of the risks of AI, having been on the government side funding a lot of startups and a lot of large and small companies building defense applications, health care applications, national applications for AI. We recognized the fact that there are lots of failures. The way I think about the failures, which motivate responsible AI, is in terms of the OODA loop — observe, orient, decide, and act. Observation: You can have bad data, failed perception, bias, like I was suggesting. Machines literally can be convinced that they’re seeing a yield sign when they see a stop sign — studies have actually demonstrated this.

Sam Ransbotham: Right, the adversarial.

Mark Maybury: Exactly: adversarial AI. You can also confuse an AI by biasing a selection, by mislabeling or misattributing things, so it gets oriented in the wrong way. Take the classifications I talked about before: You could force them to see a different thing or misclassify. Similarly, AIs can decide poorly. They could have misinformation; there could be false cues or confusion. We saw this in the flash crash, where AIs trained to do trading didn’t recognize when things were going bad, and poor decisions were made. And then finally, there can be physical-world actions. We’ve had a couple of automated vehicles fail because of failed human oversight of the AI — over-trusting the AI or under-trusting the AI — and then poor decisions happen. So that’s the motivation.
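
The stop-sign/yield-sign confusion is the canonical adversarial-example attack. Below is a minimal PyTorch sketch of the fast gradient sign method (FGSM) — a toy linear model and random input stand in for a real sign classifier, so treat it as an illustration of the mechanism, not a reproduction of those studies.

```python
# Toy FGSM (fast gradient sign method) sketch: perturb an input in the
# direction that increases the model's loss. Against a real image classifier,
# a small epsilon can flip "stop sign" to "yield sign" while the image looks
# unchanged to a human. The model and data here are random placeholders.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 2)               # stand-in for a sign classifier
x = torch.randn(1, 16, requires_grad=True)   # stand-in for an input image
true_label = torch.tensor([0])               # pretend class 0 is "stop sign"

# Compute the loss gradient with respect to the input, not the weights.
loss = F.cross_entropy(model(x), true_label)
loss.backward()

epsilon = 0.25                               # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).detach()

with torch.no_grad():
    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Defenses such as adversarial training, input sanitization, and ensembling diverse sensors are exactly the kind of resilience engineering Maybury describes elsewhere in the conversation.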

And then we studied work in Europe and Singapore and at the World Economic Forum. In the U.S., there’s a whole bunch of work on AI principles, on algorithmic accountability, and White House guidance on regulation of AI. We’ve been connected into all of these things, as well as to the Microsofts and the IBMs and the Googles of the world in terms of what they’re doing on responsible AI. And we, as a diversified industrial, said, “We have these very complicated domain applications in manufacturing, in aviation, in transportation and tools, in home health care products or just home products — so how do we make sure that when we are building AI into those systems, we’re doing it in a responsible fashion?”

So that means making sure that we’re transparent about what the AI knows or doesn’t know; making sure that we protect the privacy of the information we’re collecting about the environment, perhaps, or the people; making sure that we’re equitable in our decisions and unbiased; making sure that the systems are more resilient and more symbiotic, so we get at that augmented intelligence piece we talked about before. All of these are motivations, because we’re a company that really firmly believes in corporate social responsibility, and in order to achieve that, we have to actually build it into the products that we’re producing and the methods and approaches we’re taking — which means making sure that we’re stress-testing those products and designing them appropriately. So that’s the motivation for responsible AI.

Sam Ransbotham: What are you excited about at Stanley Black & Decker? What’s new? You mentioned projects you’ve worked on in the past. Anything exciting you can share that’s on the horizon?

Mark Maybury: I can’t go into great detail, but what I can say right now for your listeners is, we have some extreme innovation going on in the ESG [environmental, social, and governance] area, specifically when it comes to net zero. We’ve made public statements that our factories will be carbon neutral by 2030. We have 120 factories and distribution centers around the world. … No government has told us to do that. That’s self-imposed. And by the way, if you think, “Oh, that’s a future thing; they’ll never do it,” we’re already ahead of target to get to 2030. But we’re also — pulling in a little bit closer, by 2025 — going to be plastic-free in our packaging. So we’re getting rid of those blister packs that we’ve all gotten so accustomed to. Why? Because we want to get rid of microplastics in our water and our oceans, and we feel that it’s our responsibility to take the initiative. No government has asked us to do this. We just think it’s the right thing to do.

We’re very, very actively learning right now about how we get materials that are carbon-free, how we operate our plants and design products that will be carbon-free, how we distribute things in a carbon-neutral way. This requires a complete rethinking, and it requires a lot of AI, actually, because you’ve got to think about smart design: Which components can I make to be reusable? Which can be recyclable? Which have to be compostable? The thing here is really to think outside the box. …

We’re a 179-year-old company, so we’ve been around for a while. And, as an officer of the company, my responsibility is as a steward, really, to make sure that we progress along the same values as Frederick Stanley, who was a social entrepreneur and the first mayor of New Britain [Connecticut], [and who] turned his factories to building toys for children during the war, when there were no toys … I mean, just a very community-minded individual. That legacy, that purpose, continues on in what we do. And so, yes, we want high-power tools, and, yes, we want lightweight cars, and we want all those innovations — but we want them in a sustainable way.

Sam Ransbotham: Thank you. I think many of the things you described — for example, the different levels at which you think about innovation — will resonate with listeners. It’s been a great conversation. Thanks for joining us.

Shervin Khodabandeh: Mark, thanks. This has really been a great conversation.

Mark Maybury: Thank you very much.

Sam Ransbotham: We hope you enjoyed today’s episode. Next time, Shervin and I talk with Sanjay Nichani, vice president of artificial intelligence and computer vision at Peloton Interactive. Please join us.

Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn, specifically for leaders like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.


During their conversation, Mark described how categorizing the technology-infused innovation projects he leads across the company into six levels, ranging from incremental improvements to radical innovations, helps Stanley Black & Decker balance its product development portfolio. He also shared some insights for organizations thinking about responsible AI guidelines and discussed how Stanley Black & Decker is increasing its focus on sustainability.


ABOUT THE HOSTS

Sam Ransbotham (@ransbotham) is a professor in the information systems department at the Carroll School of Management at Boston College, as well as guest editor for MIT Sloan Management Review’s Artificial Intelligence and Business Strategy Big Ideas initiative. Shervin Khodabandeh is a senior partner and managing director at BCG and the coleader of BCG GAMMA (BCG’s AI practice) in North America. He can be contacted at shervin@bcg.com.

Me, Myself, and AI is a collaborative podcast from MIT Sloan Management Review and Boston Consulting Group and is hosted by Sam Ransbotham and Shervin Khodabandeh. Our engineer is David Lishansky, and the coordinating producers are Allison Ryder and Sophie Rüdinger.

Content Source: MIT Sloan Management Review


