The Interconnected Future Of Technology And Humanity, With Kate O’Neill

The future of technology is rapidly changing the world of work, and that has made many people fear that technology will replace them. But you don’t have to be afraid, because it’s not about the robots and remote work; it’s about YOU. In today’s episode, Kate O’Neill, author of Tech Humanist and founder of KO Insights, delves into how human-centric digital transformation shapes the workplace and upskills humans. Explore the ethical implications of AI automation and how to ensure technology serves humanity. Also, hear Kate’s insights on using tech to solve the world’s biggest challenges – with a human touch. Today’s conversation is something you don’t want to miss! Join Kate O’Neill and be inspired to shape your future in a technology-driven world.

 

Check out the full series of “Career Sessions, Career Lessons” podcasts here or visit pathwise.io/podcast/. A full written transcript of this episode is also available at https://pathwise.io/podcasts/the-interconnected-future-of-technology-and-humanity-with-kate-oneill.

Watch the episode here

 

Listen to the podcast here

 

The Interconnected Future Of Technology And Humanity, With Kate O’Neill

Author, Speaker, Strategist, And Founder Of KO Insights

My guest is Kate O’Neill. She is the founder of KO Insights and a prominent author, speaker, and strategist known for her expertise in technology, data, and human-centric digital transformation. She is often referred to as the Tech Humanist, and her work focuses on the intersection of technology and humanity. Highlights of Kate’s career include founding [meta]marketer, being an early employee at Netflix, and consulting and speaking. She is the author of five books, including A Future So Bright, Tech Humanist, and Pixels And Place. She lives in New York City. Kate, welcome, and thanks for doing the show with me.

Thank you.

You are another part of the Thinkers50 crowd that I met in November 2023, when the conference was here in London. I’ve had some great conversations with people leading up to that and after it. People always wonder whether they should go to conferences. For me, that was a good investment of a day, essentially a weekday and a bit of the Sunday, to get to know a bunch of people I otherwise probably wouldn’t have gotten to know. You and I were standing in line together, and that’s how we met.

It’s a great topic for your audience because it’s about getting out of it what you put into it. What I found with that group, in particular, is that the real value has been in the follow-up after the event. Many great conversations took place in hallways, or even at the coat check waiting to get your coat back, but it’s the Zoom calls and the get-togethers that have happened since then that have been full of value.

People are more open to meeting each other at that conference than at any conference I think I have ever been to. They’re genuinely interested in and care about people and what they’re working on. Usually it’s much more superficial.

We’re all very curious about each other, but we’re also very interesting people. I think a lot of people at conferences are very interesting, but I think so often we’re there with a little bit of cynicism, like, “What does this person want out of me? Why are they trying to have a call?” Maybe taking a more curious approach would get us good value.

Very true. You are among that group of being very interesting and impressively accomplished. Tell us a bit about what’s keeping you busy right now.

I am working on a book. That’s keeping me very busy.

I figured it had been about three years.

Technology And Humanity

I was definitely getting into a rhythm for a while there: every two to three years, I’d have a book come out. That would’ve meant one came out in the fall of 2023, right on time. Since it didn’t, it’s been ruminating more. I’ve been having it swirl around with the ideas I’ve been working on with my consulting clients, speaking clients, and everyone I speak with. It’s fantastic when I get to do a keynote speech and then do Q&A with the audience. It’s real-time research on what’s top of mind for people. That’s been helpful in dialing in on what it is I want to explore.

Do you want to talk a little bit about what your next book will be about, or is it a secret?

It’s not a secret. I will be candid and say it doesn’t have a very good title yet. It’s in that working, conceptual-title stage. It’s almost a synthesis of my last two books, Tech Humanist and A Future So Bright. It’s thinking about how technology that’s developed for business purposes has an impact on humanity, and how we can make better decisions around that technology to make sure we’re not endangering the future with the unintended consequences we roll out. It looks across that whole workflow: how do we better analyze what it is we’re trying to do and what technology can help us do? How do we make sure we’re considering human factors when we’re pulling those things into business objectives and scaling them out? And how do we make sure we’re building it in such a way that it has the best chance of impacting humanity well, as opposed to impacting humanity badly, which we would like to avoid?

I hear that from a lot of leaders. They’ll read my books or hear my keynotes and say, “These are compelling ideas. It’s what we want to hear.” But it’s harder than ever to put these ideas into practice. It requires tools of discourse for the board and the rest of the C-suite. It requires tools for decision-making and future analysis and forecasting. It requires a whole suite of tools that you don’t get when you go to B-school, and certainly not when many of these leaders did. There’s a big gap between the ways that leaders have been accustomed to making decisions and the way they’re increasingly being tasked to make decisions.

Career Sessions, Career Lessons | Kate O'Neill | Human-Centric Digital Transformation

Human-Centric Digital Transformation: There’s a big gap between how leaders have traditionally made decisions and how they are increasingly expected to make them.

 

I think back to the early part of my career, when you certainly had examples of some very big companies polluting the landscape. Think about the Erin Brockovich story with PG&E, General Electric, and others that had some very big, famous cleanup efforts they had to do. For the most part, the way we thought about consequences then was, “Are you poisoning people?” Now it’s much more complicated in terms of sustainability. It’s also much more complicated in terms of these data-centric businesses, what you do with all that data, and what is and isn’t an ethical way to use it. I’m sure we’re going to get into all of that in the conversation.

The environmental cleanup examples that you cite are good parallels. I think about wildlife, the unintended introduction of different kinds of wildlife, and what happens when they’re not compatible. These invasive species examples are such great parallels to what happens when we don’t think about data systems at scale. The mongoose in Hawaii is a classic.

Rabbits in Australia, kudzu in the American South, and all of those. The landscape is literally littered with examples of ideas that seemed good at the time.

It seemed like a good idea at the time.

What about your consulting work? What types of challenges are you typically helping your clients with?

That’s fun. It’s a different thing each time. They are packaged in different ways, but the fun of it is that they usually have some common theme: “How are we thinking about the future and its uncertainty? We’re looking at a product suite or at rolling out a new technology feature; how can we think about the ethical consequences of this? How can we think about its long-term viability? Is the marketplace changing in ways that we are not anticipating?”

Those are very fun projects. They’re very serious in many cases. A lot of times there are very real fallouts and consequences at stake, but it’s a lot of fun to dig in, especially for very short periods of time, to get a chance to dive into something, tear it all up, and go, “Did you see this? This is a very interesting thing that you have right here in plain sight in your own organization.” They go, “We did not see that.” That is probably the most fun part of my work.

Dive into something and notice what's interesting that the organization missed.

I used to love that part of consulting: being able to go in, apply pattern recognition, figure out what’s going on in the business, and come up with impactful, useful insights for the client very quickly. Diving in and learning something new appealed to my inherent curiosity. Ultimately, I got to the point where I was like, “I like doing this, but I would like to see something through,” and that’s when I shifted into the corporate world, because I wanted to execute some of the ideas, not just conceive them. But it was fun. You get a little bit addicted to constant variety when you experience it that often.

The core of my work is in keynote speaking. That is constant variety. Every week it’s a different organization or association, and they have niche focuses. For example, I’ve spoken to the Nevada Mining Association. It’s a super interesting group. There are a lot of ethical and environmental consequences. There’s a lot of technology coming into play and digital transformation taking place in that industry. There’s a huge future-readiness aspect of making sure they’re ready for the scale that lithium mining is going to require as we think about EVs and so on.

The California Water Environment Association is an incredibly complex organization thinking about clean water futures and the technology it takes to get there, and the workplace transformation too, thinking about the silver tsunami, as they call it, of the aging-out workforce. All of these different organizations have their own set of challenges, but each set draws from a superset of challenges that you see across all organizations and all industries. It’s fascinating to see the same patterns play out and to be able to draw insights from across industries and share them: “I saw this play out in the energy sector a couple of months ago. Perhaps that’s useful to you here in this high-tech organization,” or whatever. It feels very useful to be a conduit for those kinds of insights.

When you do as many keynote speaking gigs and consulting projects as you do, you get exposed to so many different things that you develop the ability to stitch together seemingly unrelated things into a theme that turns out to be relevant from one group to the next. Most of those groups don’t have any ability to do that for themselves because they don’t get the volume of different experiences that you’re getting in your work.

It’s hard for anyone, whether you’re an individual or an organization, to look at themselves from the outside. Once you’re in it, it’s very hard to see it objectively. I think that’s true whether you’re thinking about career coaching or leadership development. For folks tuned into your podcast and thinking about their careers, it benefits you to have somebody outside of you showing you what you are.

I think that’s true, whether you’re thinking about how you’re progressing through the trajectory of your career, or whether you are inside an organization trying to figure out, “We think our brand is this. Our mission and vision are this,” but we don’t know what that looks like, feels like, sounds like, or tastes like once it’s outside the organization, what the human experience of interacting with this organization is. It’s super valuable to be able to feed that back to people who are trapped in their own jar and can’t see the label on the outside.

Across all of that, the speaking, writing, and consulting work, how would you stitch it all together? What’s your why?

For me, the story has always gone back to being fascinated by technology and its capability to make human life better and more connected, for us to have more dimensional relationships with one another and ourselves. That’s been true for me since I was a child, before I first thought about any technology, computer programming, or anything. I’ve been fascinated with humanity, the human condition, and what makes humans special all my life. I’m a linguist by education. I gravitate to this explanation around meaning-making and meaning-seeking.

Career Sessions, Career Lessons | Kate O'Neill | Human-Centric Digital Transformation

Human-Centric Digital Transformation: Technology can make human life better and more connected, giving us more dimensional relationships.

 

I find that what’s fascinating in the work I get to do is the opportunity to think about tech-driven experiences as opportunities for meaning amplification or meaning acceleration. How do we find the connective tissue between what a business is trying to do at scale, what its objectives are, and what the human beings on the other side of those transactions are trying to achieve? If we find that alignment, how can we use technology to enhance and amplify it, to make it bigger and better? That, for me, is where the real opportunity comes into play: using the relationships and tools in their proper perspective and orientation with one another so that the whole human experience is enhanced.

Was there something in particular that inspired you to coin the term the tech humanist?

Becoming A Tech Humanist

We were talking about taking these disparate ideas and finding the insight at the intersection of them. I have a tendency to be attracted to opposites attracting, or the both-and-ness of situations. I still encounter so many people assuming that technology and humanity are at odds with one another. I can certainly see how and why that appears to be the case, and why it is true in some cases, but I think that inherently it does not have to be. We are the creators of technology, and we have the opportunity to infuse it with our best selves, our most enlightened selves, our most egalitarian viewpoints, and all of these wonderful attributes of ourselves, and make it help us be better. To me, that was the origin of the idea: how can we take what seems like a contradiction and make it feel true? How do we set up the model whereby it can be true, and what does that instruct us about how we would make it true if we could?

How would you define some of the specifics of human-centric or ethical technology? When you’re working with your clients, speaking or consulting, what are you telling them they should be doing in terms of key principles?

One of the biggest factors right now is AI, which has been such a big topic these last couple of years, since ChatGPT came on the scene and everything exploded in that space. I’ve been talking about AI for over a decade, but it wasn’t the same conversation as it is now. What I think is important now, more than ever, is that when we use tools and technologies under the AI umbrella, they bring with them capacity and scale like we’ve never experienced before.

When we use tools and technologies in the AI umbrella, they bring capacity and scale like we've never experienced before.

That means that the consequences of our actions and decisions are bigger. What I try to encourage clients to think about, and take action on, is that through-line thinking: how do they take the actions and decisions of yesterday and today and make sure they harmonize with what they want to have happen in the future? How do they think about ethics, which feels like such a big, important, and fuzzy word? If we think about impacts instead, about how people are interacting with these tools, the individuals, the actual people trying to navigate their way through our banking systems, hospital systems, or whatever the case may be, it gets a lot crisper. It gets a lot clearer what it is we are trying to do and what we need to be careful about in the experiences we create for people.

It’s a little ironic that you used the word “crisper” there, because CRISPR is another situation where there’s been a lot of debate about what’s an ethical use of a new technology.

CRISPR is a fun example that I bring up often when I talk about this dichotomy in how we think about the future. We’ve only been told to think about the future in one of two ways: either dystopia or utopia. But almost nobody believes utopia is on the table. Pretty much everybody would be like, “That’s not going to happen.” So utopia isn’t really part of the dichotomy to begin with, which means there’s only one way left to talk and think about the future, and that’s dystopia. That’s an unhealthy, unhelpful, and unproductive way to think about the future. It means that every technological or scientific advance, like CRISPR, gets framed automatically in this realm of, “How bad is that going to be?”

It could absolutely be bad. We need to accept, acknowledge, document, and work through those risks and harms, and make sure we mitigate them, reduce them, and do what we need to do to manage those risks, while at the same time saying, “How powerful could that be? What do we need to do with it to enhance the quality of life for everyone on this planet, now and into the future?” That’s the responsibility with something as powerful as CRISPR or generative AI.

Any of these tools and technologies has this dualistic nature. To date, we’ve tended to think we prepare for the future by holding it at arm’s length and going, “Don’t let it get out of hand. This is going to be bad if we let it get out of hand.” It’s like, “Yes, it will. That’s incredibly important to acknowledge.” But what would it mean to put the right guardrails in place, to use these tools responsibly, to think about the consequences downstream, and then use them well? What would that look like? We rarely go through that side of the exercise, and that side of the exercise is incredibly important.

It’s not going to be dystopian. I’m a big fan of dystopian books and TV shows, but we’re unlikely to end up in a purely dystopian outcome with any of these technologies. CRISPR, AI, nuclear energy: you can pick all sorts of them, probably going back close to 100 years at this point, industries with massive potential for good or bad. We didn’t think a whole lot about guardrails in the early going. That’s why we ended up with nuclear bombs and chemicals that killed people. We did a little bit of course correction as a result, but now, in this day and age, AI is a great live example of: how do you make sure you use it to make people’s lives better and not just to make somebody richer?

Human-Centered Technology

The truth is that we will go wrong somewhere. You said we probably won’t end up in a horrible dystopia, and what that says to me is: if we were going to, why wouldn’t we already have? If you ask anyone, “Are we living in a dystopia or utopia today?” most people will say, “It’s neither. It’s a little bit of everything.” I don’t understand why we imagine that it’s suddenly going to be anything other than a little bit of everything at any time in the future. It’s always most likely going to be some good and some bad, things we can manage and things that feel out of our control. Our best selves are the ones invested in learning from our mistakes, missteps, and oversights, and in building better guardrails and better lessons for ourselves and the future.

There’s this great quote; if you’re a student of dystopian fiction, you may have run across it. Neil Gaiman wrote the line, “The difference between comedy and tragedy is where you stop telling the story.” I’ve always loved that quote. To me, there’s incredible wisdom there: nothing is ever done. Few things are ever done. There’s so much circularity to life. Things get reinvented. Things get brought back. We always have the opportunity to say, “It’s not over yet. We can still take some lessons from where we’ve gone wrong. We can still build some new insights into this. We can still put some appropriate protections in place.” I just finished writing a section referencing the Amazon AI recruiting tool of the mid-2010s, the 2014-to-2018 timeframe, which many people will be familiar with.

It makes sense. They’re a company with a headcount of over 500,000 employees globally. If you’re in charge of recruiting for a company of that scale, obviously you’re going to want to create some efficiencies for yourself, so they think, “Let’s use some AI tools to figure out which resumes look the most qualified. We’ll feed it the last ten years of resumes and look for patterns, because not only do we have those resumes, but we also know which of those people we hired.” We could even say we know which of them are top performers, so we can see this through line of who’s good and look for those patterns.

Unfortunately, what happens then is that because tech is a male-dominated field, we’ve tended to hire men, so now the AI is looking for signatures of women, like “women’s” in a college name or a college that’s known to be women-only, penalizing those resumes and promoting male resumes. We go in and try to correct that. We look for those words, edit them out, and start fine-tuning. But the thought has already occurred to us: “We are probably never going to be able to remove all of the bias from the system, because human bias is part of the data set we’re training it with, and there are going to be flaws in the society around us. Does that mean the tool is useless?” I don’t think so.

I think they stopped trying to build it as an absolute recruiting source in 2018. They certainly acknowledged that they had the opportunity to use it as an input to their hiring process all along. That kind of thing is where I think we start to look for the opportunity to say, “Where can tools that may be flawed still be part of a system where, overall, we use human-in-the-loop kinds of ethical processes to make sure we have transparency and accountability, we’re making good decisions with contextual awareness, and we’re bringing real-time cultural context into it?”

Those things don’t have to be overriding. We don’t have to have AI overriding our own decision-making. We can use these things in concert with what we know, where we know we need to build out at scale. These are not easy decisions, and they’re not easy to get right. But if we stop expecting it to be easy, build for complexity, and build for scale, then we have the opportunity to do well with it. A rough sketch of the dynamic follows.
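As an editorial aside, here is a minimal, purely hypothetical Python sketch of the dynamic Kate describes. It uses synthetic data, not Amazon’s actual system or any reported details: a toy resume screener is trained on biased historical hiring decisions, the explicit gendered keyword is then scrubbed, and the learned bias persists because a correlated proxy feature still carries the signal.

```python
# Hypothetical illustration only -- synthetic data, not Amazon's actual tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)  # 1 = woman; never shown to the model directly

# Explicit keyword feature (e.g. "women's" in a club or college name)
explicit = ((gender == 1) & (rng.random(n) < 0.7)).astype(float)
# Proxy feature that merely correlates with gender (e.g. a hobby or activity)
proxy = np.where(gender == 1, rng.random(n) < 0.6, rng.random(n) < 0.2).astype(float)
# Genuine skill signal, independent of gender
skill = rng.random(n)

X = np.column_stack([explicit, proxy, skill])
# Biased historical labels: past hiring favored men regardless of skill
hired = (skill + 0.4 * (gender == 0) + rng.normal(0, 0.2, n)) > 0.8

# "Debias" by deleting the explicit keyword column, then retrain
X_scrubbed = X.copy()
X_scrubbed[:, 0] = 0.0
model = LogisticRegression().fit(X_scrubbed, hired)

scores = model.predict_proba(X_scrubbed)[:, 1]
print("mean predicted score, men:  ", round(scores[gender == 0].mean(), 2))
print("mean predicted score, women:", round(scores[gender == 1].mean(), 2))
# The gap persists: the proxy column still encodes gender, so keyword
# removal alone cannot scrub bias learned from historical decisions.
```

Run as written, the two printed averages differ even though the explicit keyword was removed, which is the point about editing words out while the bias remains; the human-in-the-loop processes Kate mentions are one response to exactly this residual risk.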

You are, on balance, a tech optimist.

I am cautious about the term tech optimist. That is the term that someone like Marc Andreessen or Sam Altman would use. They’re pretty leaned into the idea that tech can solve everything. I would call myself a tech humanist and a strategic optimist. In my mind, tech is going to be part of the answer, because it has to be. But in order for it to be good for us, we need to be thinking human first. We need to be very strategic about how we bring these questions to bear, how we make sure we’re being circumspect and realistic about the limitations, acknowledging those risks and harms, and building out for the best-case scenario at the same time.

The future is so bright.

It’s funny, the name of my book is A Future So Bright. I think that was a little too subtle because it’s conditional. The future could be so bright if we make the right choices. Let’s build a future that is so bright by making these better choices and by leaning into strategic optimism. That was the model I introduced in that book: strategic optimism, the idea that what we need to approach the future is an optimistic way of thinking about the possibilities coupled with a rigorous strategic discipline that says, “We have to be able to break this down into its components, mitigate those risks and harms, and build out the stuff that we know we’re capable of.” It’s not going to be simply about saying, “Tech is the answer. Go build it.” It’s going to be a much more rigorous process than that.

I blame you because I’ve had that song in my head all day.

It’s often my walkout music when I speak, and it is a very quirky song to try to walk on stage to.

It’s not like it’s got this natural beat to it.

It has a very weird left-footedness to it. You’re like, “What am I supposed to be doing right now?” I was at an event recently, and I happened to mention that I love Beyoncé. They played Crazy In Love as my walkout music. You cannot not strut on stage to that song. It is a good walk-on song. I’m like, “I’ve been thinking about this all wrong.”

Some of my former coworkers and I were doing final prep for a conference, probably many years ago now. We had the most hilarious afternoon trying to figure out what everybody’s walk-up music was going to be. It was such fun. It’s not something you get to do all that often, although you do.

Some people like to use She Blinded Me With Science as they walk on, but that is not an easy song to walk onto. Or Human by The Killers. That has a good atmosphere, but once the lyrics start, it’s like, “This dropped off in energy.” Walk-on music is a tricky business.

Human League. You go way back with that song. We’ve done enough on music; that’s not the subject of this show. We’ve talked about AI, but what are some of the other technologies that you think are going to have the greatest impact on society over the next 5 to 10 years?

Augmented Reality

I am a huge advocate of augmented reality. I think augmented reality has not yet seen its heyday. What Apple invested in with the Vision Pro is just the beginning, and that headset is obviously still upwards of $3,000. It’s not meant for the consumer market yet. When they rolled it out at that price point, it was clearly meant for the developer market, the groups that are going to buy them as part of their research and development and figure out the applications that will be made for it.

Once there’s an ecosystem of apps and tools, it’ll hit a price point of $2,000 and under, when it’s more of a consumer product and has a rich landscape people can use. I think there’s a tremendous opportunity there. We’ve seen some dabbling: Google Glass existed, and some other wearables were out there, but I think they were ahead of the ecosystem too. We just didn’t have the right set of problem-solving applications out there.

Think about that technology: the implication is that you are in the world, still connected to it, still seeing what’s around you, but you have a layer of content that’s just-in-time and relevant to where you are and what you’re doing, augmenting your experience of whatever you’re looking at. Imagine combining computer vision that’s aware of what you are seeing with ChatGPT or any of these generative AI chatbots that can provide you with context. If you’re curious about the history of a building you’re looking at, all of a sudden you have some summary bullets about when that building was built, what famous things happened in it, or whatever.

If you’re touring around and getting to know a new city, that’s tremendously interesting, but it has much more impactful applications than that. Think about healthcare and the opportunity for doctors or surgeons to have these augmented technologies helping with assistive information, maybe the patient’s vitals right there in front of them on a dashboard. Or think about a tech environment or an electrical engineering environment, with some guidance about the wires in front of you or the circuit board you’re working with. Think about the amount of information we constantly process, and how much of it we either have to know, look up, guess at, or work out by trial and error. There’s a tremendous amount of intervention that could come from having these tools assist us.

Career Sessions, Career Lessons | Kate O'Neill | Human-Centric Digital Transformation

Human-Centric Digital Transformation: There’s a tremendous amount of intervention that could come from having these tools assist us.

 

AR hasn’t had its heyday yet. It hasn’t had a heyday; if it’s going to have one, it’s yet to come. You think about Google Glass. My recollection of Google Glass is that it went from being something cool to, all of a sudden, “You could put on those glasses and know that you’re looking at Kate O’Neill, or at a stranger walking down the street,” and it got everybody spooked about privacy. I don’t think there was a great answer for that, and it sank it. At least, that’s my recollection. You can use these things for good, and you can use them for not good. It’s how you put the guardrails around these things as you roll them out.

Google Glass

Some of the considerations with Google Glass had to do with reputation and perception, the “glasshole” reputation that people were getting. That is certainly true as a limitation, and we have that consideration all throughout emerging technology. What it takes to make any of these technologies work effectively is data, and data is nothing if not sensitive and private. There’s a tremendous amount of data online, public or near-public, accessible through crawls, that people never intended to be publicly consumable and trainable for AIs. That’s happening left and right, but I think it’s going to be okay if we can put the proper protections in place. We’re scrambling to go backward on that and retroactively fit some of those protections into place, around IP, individual consumer protection, and data privacy.

Those things are tricky. I’ve been part of some meetings with the United Nations AI Advisory Board, thinking about things like IP. It’s tricky to try to imagine the right scenarios, regulations, and protections that fit one scenario without being an overreach in another, or that aren’t infeasible in another environment. It would be so much easier if we always had protections and regulations in place before technology got to scale, but it just doesn’t happen that way. We have to be in this business of looking backward and going, “That’s too much. We probably shouldn’t have let the cat that far out of the bag. Now what do we do, and how do we get this to the right place?”

That’s a recurring theme: regulation struggling to keep up with new technologies. We are now seeing a plethora of data privacy laws enacted around the world, but for all practical purposes, this has been a problem for many years, and they’re only now catching up. I think the next big one they’re going to have to deal with is whether it was legal for an AI crawler to consume every bit of what you’ve written that’s publicly available and basically disintermediate you.

Part of these discussions and considerations is that it has already happened. People are already extracting value from these training sets. Now what? Does there have to be some invention of a business model that goes back and retroactively compensates people, fractions of pennies at a time, for the use of their IP in these training sets, or is that not practical? We have yet to see how that’s going to take shape. It’s a fascinating space. I’m not an expert in the regulatory or policy space, but it’s one I have to have a certain familiarity with because it dovetails so much with these recommendations for ethical and responsible tech.

It’s mostly instructive, especially as far as your readers are concerned. When you’re thinking about the set of tools and skills that you have and how to take them to the next level, it’s interesting to know that there are things you don’t necessarily have to be an expert in. You just have to have enough curiosity and interest about how they intersect with what you are an expert in, and look constantly for ways to strengthen that relationship, to make sure you’re adding the most value at the intersection of the area where you are an expert, or where you feel you add the most value, and where there’s an opportunity to bring that outside expertise into the discussion.

You don't necessarily need to be an expert in something. But you should have enough curiosity and interest in how it intersects with your expertise. Continually look for ways to strengthen the connection between these two areas.

I could talk about tech all day, but given that this is a career-focused show, we should probably cover other topics too. I’ll go to the future of work next, which I know you also write and speak about. What do you think will be most different about the future of work relative to what we’ve had for many years?

The Future Of Work

We’re still seeing this shake out, and it’s been impacted so much by COVID, which played such a huge, disruptive role in the whole thing. Pre-COVID, I wrote a piece making what was then a fairly novel observation: we tend to blur the language around the future of work and the future of jobs, when it’s very helpful to disambiguate them. Think about what employers mean when they talk about the future of work; often that means workflow and processes, how they’re going to assign job functions and carve out headcount. Whereas when people talk about the future of jobs, that’s very often a human-centric question. It’s an individual wondering, “Will I have a job? If so, what will that look like? Is it going to be disrupted by technology, automation, AI, or robots? Is it going to be disrupted by remote work? Is it going to be disrupted by outsourcing and offshoring to other countries? What are these big implications?”

It’s useful to disambiguate those because they are very separate existential questions that obviously have incredible overlap. To those two, though, COVID added a third question: the future of the workplace. That became a real hot topic as we thought about remote work and hybrid work. Post-pandemic, we’re starting to find surer footing around what hybrid and remote workplaces look like, but we’re also seeing a lot of recidivism, CEOs putting their foot down and demanding that people go back to the office.

There’s still a lot of shakeout to be done around that, and it has a tremendous impact on how we bring automation and AI into the discussion. AI was obviously already part of the discussion, but it’s accelerated in these last few years, post-ChatGPT and the rise of generative AI in general. Overall, we’re seeing disrupted, modular thinking from some companies, much like the K-shaped recovery in the economy overall. For some consumers, things are looking great; they’re spending a lot of money. Other consumers are tightening their belts; for them it feels recessionary. In similar ways, I think we see that across the workforce.

We see organizations that are leaning fully into remote and hybrid post-pandemic, making the most of the efficiencies and advantages of that, bringing in automation as an accelerator for those teams, and making sure that AI helps them function well. We talked a little bit about my virtual note-taker that’s sitting in on our discussion; those kinds of tools are a big part of that process. Then there are the types of environments where there’s a lot of disruption to job functions. A lot of newsrooms, for example, are displacing editing or reporting functions with automated intelligence, synthetic intelligence, and generative AI.

Career Sessions, Career Lessons | Kate O'Neill | Human-Centric Digital Transformation

Human-Centric Digital Transformation: Organizations embracing remote and hybrid work models post-pandemic can maximize efficiencies and advantages by leveraging automation to accelerate team workflows and ensure AI effectively supports their operations.

 

We don’t know what that’s going to look like over the years to come. What that stands to do, on that side of the decision-making line, is reshape job roles and the relationship to freelancing, to the open market of the gig economy, and so on. There are a lot of moving parts in that space. It’s hard to talk about in a cohesive or coherent way because there are many different pieces, each different from the others. It’s important to be able to analyze them discretely, as individual components, because they do play out differently from one company to another. I’ll sit backstage before a keynote and talk to the CEO of a company that on some levels looks an awful lot like another one I may be speaking at, with roughly the same number of employees or revenue scale or whatever it might be.

But one of them is adamantly getting back to work in the office, the other is taking a fully remote approach, and even the culture feels different. You can feel how that plays out in the decisions they’re making and talking about: how AI comes into the discussion, how they’re thinking about things like self-care, and all of these different facets of the workplace. It’s not an easy thing to give a pat answer to, which is why mine is very long-winded.

If you think about some of the things you’ve covered, you’ve got this power struggle going on between employers and employees about flexibility and remote work. You’ve got a return to unionization, which I never thought I would see in my lifetime; it felt like it was dying. And I think the whole hybrid-remote thing is very much tied up in that. You’ve got all these new technologies coming in. You’ve got macroeconomic forces that affect things like offshoring. You’ve got geopolitical threats that have people thinking differently about what they do, and sustainability impacting all of this. How do you throw all that into the mix and say, “Here’s my advice for you on how to future-proof your career”? I think the only thing you can say is, “Stay adaptable.” Keep an ear to the ground and stay adaptable, because it’s hard to know how all these things are going to play out.

Those are literally the words that I would use. You used this phrase, future-proof, how to future-proof your career. I have a talking point I sometimes make about how I hate the term future-proof. It’s a funny word because you can never be future-proof. You can’t do that. The future is coming whether you like it or not. What you need to do is be more or less future-ready. There’s a future readiness that we can all stand to get better at, and that’s not to ding you. You teed me up perfectly with that word.

It’s a bit of a throwaway term. You can’t really completely future-proof yourself; it’s impossible. You have to be as ready as you can be. The reality is that for some people, that’s going to be very difficult. The manufacturing industry saw it when jobs got automated or moved to other countries, and the white-collar, office-based workforce saw it when jobs got offshored. Some of what’s going on now has the potential to affect a very broad swath of the world’s workforce. At the same time, we can’t just unemploy everybody, or the world collapses. Clearly, there’s going to be some governing, limiting factor that prevents too much from happening too quickly.

It’s important to acknowledge that there are responsibilities at different levels. At the individual level, the responsibility is to stay adaptable. Make sure you’re learning as much as you can about the different kinds of technologies that are coming out and becoming dominant in the industries you’re interested in. That doesn’t mean you have to become a programmer. I always like to make that distinction. People sometimes assume that when we talk about becoming familiar with technology, it means, “I have to be a programmer now.” You do not have to become a programmer.

Stay adaptable. Make sure you're learning about the different technologies becoming dominant in the industries you're interested in.

“I can write programs now.”

AI can program better than most humans now. It has to do with simply not being intimidated by the technology that is mainstream and that’s going to be common in the workplaces you’re interested in being part of. The companion to that is that corporations, to the extent they want to keep a talented workforce as part of their teams and culture, need to be invested in building that culture as a resilient one, using upskilling and reskilling. It’s important to point out that some of the most useful and effective upskilling and reskilling programs are happening at the state and city levels.

These are government investments. In A Future So Bright, I wrote about examples in Nevada and Nebraska, for instance, that both had clearinghouses of available jobs and sets of on-site training, so someone could go to a public library, sign into a state database, and take MOOC-style online courses with modules to train them for the types of jobs coming available. Those kinds of opportunities are enormous, and they come right back around to the individual level. If those resources are available, you have to take advantage of them in order to stay adaptable, know what’s out there, and know what skills you need.

You have to continue to invest in yourself. One of the core beliefs I have as it relates to careers is that you have to take ownership of your own career. That means you have to be willing to keep learning and keep adapting. If you want to go to work every day, do the same thing, and not have to think about it, that’s going to be challenging. There’s a set of forces constantly putting pressure on whether that status quo can continue to exist. I think that’s a hard reality for a lot of people.

I assume that most people reading this discussion are not people who want to phone it in and be meaninglessly, thoughtlessly employed somewhere. They want meaningful, thoughtful, and challenging work that inspires them and lifts them up. The tragedy of the situation is that people who are motivated that way are feeling existentially threatened by the types of displacement and replacement of job functions that we do see in some cases with AI. I don’t think all is lost there either. We have seen, time and again, this reinvention of new job types around the automation that’s been introduced.

I don’t think that we’re at the end of this cycle. I think we are seeing far more new jobs created than people give credit for. They may not be net new jobs; we may be seeing job losses in some cases. But we still are seeing an awful lot of creativity and innovation in the types of job roles that are out there. You have to stay curious, stay creative, and start reimagining yourself as being plugged into these new stories.

Career Sessions, Career Lessons | Kate O'Neill | Human-Centric Digital Transformation

Human-Centric Digital Transformation: There is much creativity and innovation around the types of job roles out there. Stay curious and open yourself to the idea of a new career path, exploring the exciting new opportunities in the world of work.

 

Kate’s Future Plan

Last question. What’s ahead for you over the next few years, other than getting this next book done?

I definitely plan to continue exploring this intersection of tech and humanity in the future. I don’t think there’s anything more important aside from the climate right now. I think climate is our most important, urgent task to figure out, and in my mind, figuring out emerging technology and its relationship to humanity is part of solving that problem.

It’s a form of sustainability.

If we figure out how to harness what these technologies are capable of, there’s no limit to the problems we can solve with them. That’s not to say it in a techno-optimist way; we definitely need to do it responsibly. We need to do it in proportion and in alignment with society, with human values at the core of that work. That’s what drives me. I want to make sure I’ve done what I can to help influence that discourse so that those decisions are made fully with responsibility and human centricity, because to me there’s nothing that has more power and potential than that.

There's no limit to the problems that we can solve with it if we figure out how to harness what these technologies are capable of.

We’re going to have technology. Hopefully, we’re going to have humanity too. There’s always a need to think about how those two interplay. In many ways, you’ve got a perpetual job in life: to continue to think about what’s new and how we manage our way through it.

As a parting thought for folks, that’s a good exercise. If you haven’t thought about what your lifelong passion or obsession is, the one that will constantly reinvent itself, why not? It’s a great exercise for some rainy day, weekend, or a time when you’re sitting at a coffee shop asking yourself some deep, provocative questions. What is it that you could continually reinvent, going back to the source and saying, “How could I continually create new value, solve more problems, and add more to the advancement of this topic?”

It’s good homework for anybody who’s made it this far in the discussion to take away. Looking at my questions, I think we got to about 20% of them.

We have to do five more interviews.

I’ll put you back in the rotation at some point. That would be great. Thanks for doing this. I appreciate it. It’s been a lot of fun.

Cheers.

I would like to thank Kate for joining me to discuss the future of technology and work, what it means for our career journeys, and how our own careers have unfolded. If you’d like to focus more on your own journey, visit PathWise.io and become a member. Basic membership is free. You can also sign up on the PathWise website for our newsletter and follow us on LinkedIn, Facebook, YouTube, Instagram, and TikTok. Thanks. Have a great day.

 


 

About Kate O’Neill

Kate O’Neill is the founder of KO Insights and is a prominent author, speaker, and strategist known for her expertise in technology, data, and human-centric digital transformation. She is often referred to as the “Tech Humanist,” and her work focuses on the intersection of technology and humanity. She advocates for the responsible and ethical use of technology to improve human experiences and social outcomes.

Highlights of Kate’s career include founding [meta]marketer, a digital strategy and analytics firm; being an early employee at Netflix, where she worked on customer experience and product development; and consulting and speaking, where her clients have ranged from start-ups to Fortune 500 companies, and her topics have included digital transformation, AI, and the future of work.

Kate is the author of five books, including A Future So Bright, Tech Humanist, and Pixels and Place. She is a contributor to magazines such as Forbes and Wired. She earned her Bachelor’s degree in German at the University of Illinois – Chicago and her Master’s in Linguistics at San Jose State University. She lives in New York City.

 


©2024 PathWise. All Rights Reserved