StarCIO Digital Trailblazer Community - Confidence to lead, community to advise

Coffee With Digital Trailblazers
From Digital Leader to Frontier Firm: AI and Governance Strategies

Episode Summary

This week’s episode of “Coffee with Digital Trailblazers” focused on the concept of “frontier firms” – organizations that are embracing AI and agents to transform their operations. Key points include:

Effective AI governance requires a holistic approach that aligns strategy, operations, and data/security management, rather than just policy-based controls.

Frontier firms are blending machine intelligence with human judgment, creating AI-operated but human-led systems.

Adopting AI and agents requires careful governance and collaboration across IT, security, legal, and business teams to manage risks and ensure reliable, secure operations.

Leading organizations are using AI agents for tasks like software development, compliance, and customer service – with humans maintaining oversight and control.

Transcript

Isaac Sacolick:

Greetings everyone. Welcome to this week’s Coffee with Digital Trailblazers. I’m just going to give this a few seconds to let a bunch of people get here, and we’ll get started with a very interesting topic and a very interesting guest, so just sit tight. This is our, is that right? Our 135th episode. Holy cow. Time flies by, and I met a whole bunch of new people this week that I’m excited to have on the program and be a part of our community of Digital Trailblazers. If you are here, please do say hello in the comment stream, and I look forward to this conversation. We’re going to be talking about going from digital leader to frontier firm, and I’m going to let our special guest, Niraj Tenany, who’s the CEO of Netwoven, tell us a little bit about that when we get started. We are just waiting for a few more people to get here, and then we’ll start with our conversation today.

Hello Gloria. Welcome. Got Derrick here on the comment stream. We’ve got Niraj. Who else is here today? Everybody do say hello. I want this to be an open conversation. Hello Dennis. Thank you for joining. You’re all welcome to comment. I will call out some of the comments that I think are useful, and we might put some of them into the actual whiteboard as we start creating it. Hello Jay. Jay is a good friend, a StarCIO member, and an ex-Microsoft Copilot expert. Great to see you. Hello Michael. Thank you for joining from Atlanta. Michael, I think you know this, I’ll be speaking in Atlanta. The new date is September 10th. This is an event at the Microsoft Center that Netwoven is sponsoring, and I am very excited to be there. So Michael, if you don’t have the access link to get to that, do let me know.

I’ll make sure you get on the list to be able to come see that program on AI governance in a little bit over a month. Folks, thanks for joining this week’s conversation, From Digital Leader to Frontier Firm: AI and Governance Strategies. Today’s episode is sponsored by Netwoven. We’ll have a little bit more about Netwoven as we get into the conversation; Niraj and I are good friends, as are a few of the folks over at Netwoven. I first met Chris Wilkinson, I dunno, a whole bunch of years ago, and we stayed in contact. He was a big fan of Driving Digital, and he had me come out and speak at a Netwoven event for Microsoft. Was that back in February? We spoke about AI governance and strategies and things like data security posture management, and just getting more organizations ready for this era of AI. And believe it or not, back then agentic AI and AI agents were just on the tip of our tongues. We weren’t even really talking about them. And here we are today, and you can’t stop hearing about them. So I’m really interested in seeing where this conversation goes today. Niraj, welcome to the floor. Thank you for sponsoring this week’s episode. Tell us a little bit, help us define what a frontier firm is and how digital leaders can guide their organizations on AI transformations. Good morning, Niraj, and welcome, and thank you for being here.

Niraj Tenany:

Thank you Isaac, and thanks everyone for joining this morning. You guys hear me fine?

Isaac Sacolick:

Yes.

Niraj Tenany:

It’s a very interesting time in the age of AI these days. I did my first event on AI and governance back in November of last year, and I look back and see just how much has really changed in the AI space in the last six months. You talked about the frontier firm; this is a term that was actually coined recently, back in April. Jensen Huang, as I’m sure you all know him, identified four waves of AI: wave one being the perception wave, wave two being the generative wave, wave three being the reasoning wave, and wave four being the physical wave. We’re in what we call the reasoning wave, where these agents are becoming predominantly important. What is really going on is that a new organizational blueprint is emerging for how companies operate and work. This blueprint blends machine intelligence with human judgment, and that is where AI and human coordination is moving forward. The systems being built are AI-operated but human-led. This is the emergence we are seeing with organizations, and Microsoft has coined the term frontier firm for them. So that’s really what we are seeing with these frontier firms that are being operated by humans and agents together.

Isaac Sacolick:

So tell me, it sounds a little bit like not being on the bleeding edge, but at least being an early adopter and going beyond just experimentation to actually starting to realize value from AI. Maybe speak a little bit about why it’s important for organizations to be ahead of the curve and become a frontier firm.

Niraj Tenany:

This is like any technology curve that we go through. If you remember when Twitter came out, the first few conversations people had were comments on simple things, and then it became a very potent business tool. We’re seeing the emergence of the same thing happening with the frontier firm concept. The first wave was around the use of AI assistants to do simple tasks. Now we’re seeing this emergence of getting complex tasks done by the so-called AI agents, obviously led by humans, and it is moving into doing more complex and involved tasks for organizations. So that’s what we are seeing currently.

Isaac Sacolick:

Thank you, Niraj. I’m going to pass the mic over. Let’s go to Joe first this week, and then see who raises their hands. Joe, we’re always looking at emerging technologies. I don’t think we could call AI emerging, or even generative AI emerging, but a lot of what companies are trying to do with AI still feels emerging. What is your sense; how would you describe the level of urgency and importance of being a little bit on the frontier when it comes to AI technologies?

Joe Puglisi:

Well, there’s obviously a lot of FOMO. Companies are looking at articles published every day, news stories, and all sorts of things swirling around, as you would expect. The proverbial airline magazine story: every board, every CEO is asking, what are we doing with AI? So there’s pressure from the top. But I just want to make a point about frontier organizations. There’s an old adage about how you could spot the pioneers in the early days: those were the guys with the arrows in their chests. So I’d love to hear from Niraj, and I’m sure we’ll get into this, how we avoid the pitfalls of being too far out front, because that is a real danger and one that we have to be very cognizant of.

Isaac Sacolick:

Niraj, you want to comment on that a little bit?

Niraj Tenany:

With every technology inflection point we face this anxiety, and it all links back to strategy, because AI in many ways is very different and people really have to embrace it sooner rather than later. Having said that, it’s no different than any other technology inflection curve we’ve gone through, where there is an initial hype period where you’re testing it, trying it, figuring out what works and what doesn’t. Then you identify certain use cases and business processes that you want to automate and take to the next level. So we’re seeing the same pattern from previous inflection points being applied to the AI curve here. The only difference this time is that the rate of acceleration is rapid. I’m sure you recall some of the statistics from OpenAI and others in terms of how fast adoption has occurred. That is where I think we’re seeing a sea change. The implementation phase, working with organizations, and the CEOs looking at how to implement it, that process remains the same. However, there is rapid acceleration.

Isaac Sacolick:

I want to comment on that, Niraj. People who look at my background in transformation, what I tell them is I got a front seat to it. In 1996, I joined a company helping newspapers go from print to web, and we were doing everything from editorial to classifieds. Every major business line ended up becoming digitized, with user experience and workflow changes. ’96 through 2000 was one wave, 2001 was another wave. We’re talking multiple years of the newspaper industry being able to adjust to disruptive technologies, and most of you who are listening know it didn’t really end well for that industry. They didn’t challenge their status quo, they didn’t challenge their operating model, their business model, their way of interfacing with customers, and that was over a pretty significant period of time. Agents are now really about a year old, large language models about three years old, and there’s a whole new wave coming around agentic AI. I heard a new buzzword yesterday for the first time, the internet of agents; I’m wondering if Joanne knows this term. I’ll let her comment on that next. But Derrick, I expected your hand to go up for AI governance. Tell me what it means for you, as a security expert, to be a frontier firm.

Derrick Butts:

Yeah, a great question. When you look at this, I like Joe’s analogy that those out front are the ones with the arrows in their chests. From my perspective, those frontier firms, yeah, they’re going to be the ones leading the innovation, but they’re also setting the pace for innovation. You have to look at the risks involved with that. I think companies also need to be fully engaged to champion the adoption of this from a security perspective. Nobody wants to jump into anything new that’s going to make them spend more money to mitigate threats or risks of that nature, but you also have to look at how the culture can change to work with it. As mentioned earlier, the value case: what’s going to be the value to the overall organization? How’s it going to improve ROI? How’s it going to improve business processes?

How’s it going to improve anything that’s going to make the business that much more substantial in the marketplace it’s trying to dominate? The pace at which things move, as Niraj mentioned, is stellar; it’s a rapid pace we’ve never seen before, and unfortunately a lot of companies have stumbled, where there have been issues and they’ve actually rolled out things they should not have. I think we need to take more of a resilience mindset to the AI adoption lifecycle. How can we maintain this frontier pace in a secure manner while also maintaining our principles moving forward? There are companies out there doing it, but you’ve got to have the heart to do it, because like I said, it’s not an easy task. Those that can really sustain that momentum are the ones that are going to move forward and be premier.

Isaac Sacolick:

Thank you, Derrick. It’s very interesting to think about that balance of being in the front but avoiding the arrows, as Joe called it. Joanne, your comments on the frontier firm? And maybe a comment here: Dennis asked a question in the chat about what the table stakes are for being a company on the front of AI, around data or talent. I wonder if you have a comment on that.

Joanne Friedman:

I do. First of all, it’s about purpose. It’s about outcome. In the agentic space, you use generative AI for creative work; you use agentic AI when you want an outcome. So here’s an example of an outcome. Yesterday we were with a prospect, talking about our technology and our platforms, et cetera, and we used meeting capture software, whether it’s Otter or Fathom or choose your brand. We then took that into a tooling kit that we have and use for software engineering. And before we walked out of the meeting, or got off the Zoom call I should say, we were showing a prototype of what the customer would want. So one hour from start to finish, and they had a piece of software ready to use as a prototype. That’s being a frontier firm: taking technology and leveraging it not only for purpose but for its value, and showing someone, this is what you can do with it.

Now, it was a very rough prototype. It was not anything ready for prime time, but it was the visualization of what they were saying: this is what we think we need, this is what we want to get to, and this is the outcome we’re trying to achieve. We took those notes and reverse-engineered what they were talking about into something they can actually touch and feel. To me, that’s what being a frontier firm is. It’s very bleeding edge. We know that in the agentic space, trust is a five-letter word. To Derrick’s point, nobody trusts agents right out of the box. And to the comment that you made earlier, Isaac, about the internet of agents, we commonly call it a swarm, referring to the buzzing of bees, because you literally can run things in parallel. So what I’m trying to illustrate is not only that the frontier firms are those pushing the boundaries, but that they’re being very clever, as we hope to be, about how they’re using AI technologies to drive outcomes faster and without friction, without necessarily breaching the trust of the end user, while also using them as a way to build trust in the capability.

What we heard from our prospect was, that’s amazing, how did you do that so fast? And we showed them literally step by step: these are the tools we use, this is how we capture what we think you’re telling us. If we’re wrong, no harm, no foul. If we’re right, tell us which parts are right, tell us what you don’t like about it, and then we can iterate very, very quickly. So to the agile discussion, this is now accelerating agile to the point of delivery in minutes, not days, weeks, or months. And that’s what frontier means to me.

Isaac Sacolick:

That’s awesome. Congratulations on your shift from POCs into live demos with customers. It’s awesome to hear the progress you’ve been making, and we’re going to have to invite you to do an unveiling and talk about more of what you’re doing in a future episode. Let’s bring John in. John, you’re the person I love asking about emerging technology in general, in terms of where the use cases are going. So what do you consider a frontier firm? And I want to go back to the question I asked earlier from Dennis: what are the table stakes today, and what is a frontier firm ahead of?

John Patrick Luethe:

Yeah, and I think AI is different than almost any of the technologies we’ve had in our lifetime. To me it’s a foundational or infrastructure technology. It’s kind of like electricity, because it can be applied almost everywhere. And so if you think of it as a technology, its impact on society has to be on the same level as the internet, in my view. So frontier firms, to me, are firms that are looking at everything they do, all the work that they do, everything they do for customers, everything they do internally, and asking: what do we want the humans to do, and what do we want the AI technology to do? The frontier firms literally looked at all the work that was in progress and asked what work they needed to stop so they could reprioritize and change the direction of the company. And you can see that when you read companies’ quarterly reports, or if you’re talking to the people in these companies: the companies that are technology companies and frontier companies are changing the direction of the company, the trajectory of the company, what the company is, to align to this.

Isaac Sacolick:

I love that concept, and we talked about the need to say no and stop doing things that are at a dead end. I think that’s a good definition to bring in around frontier firms. And what I wrote here is: frontier firms reinvent themselves. Joe, I’m going to give you the last word on this, and then we’re going to move into the topic of human agent teams.

Joe Puglisi:

Yeah, building on the comments that were just made, it’s really more of a cultural shift. It’s a change in the way the company operates from top to bottom and across the board; it’s more than just new tools or technology. It’s really a fundamental change in the organization. Some people who perform mundane tasks are going to be elevated or eliminated, and other people are going to embrace this technology and be able to accelerate what they do and do it better. So it’s really a fundamental change in the culture, the complexion, and the nature of the way the people within the business operate. And to Joanne’s point, all in the service of better or different business outcomes.

Isaac Sacolick:

Very cool. Thanks everybody. Let’s move on to our second question today. This is a term, I don’t know if Microsoft created it, but it’s highlighted in their paper on frontier firms, and it speaks to how we should be thinking about agents themselves. I’ve heard a lot of descriptors for this, and I think this is the one I like the most, Niraj: this idea of human agent teams. I really like your description of human judgment plus machine intelligence. What are some examples of human agent teams that Netwoven has seen working with your customers, and what do you see as the business value being delivered?

Niraj Tenany:

Thanks, Isaac. I’ve got to be honest, a couple of months ago, or maybe a year ago, when I first heard the term agent and read about it, to me it sounded like a subroutine in a program. For some of us who grew up through the programming ranks, I used to write subroutines, and it was no different to me. So I kept digging in to find out, okay, what’s different about this from a subroutine? Why are we really calling it an agent? And over time, as things began to mature, it became very clear. We started from subroutines; back in the two thousands we had what we called COM and DCOM components that were managed. Now we’re in the age of agents, which are autonomous entities with minds of their own, obviously with human-led interactions, where they’re doing certain things and learning to do things.

I’ll share with you, as a CEO, people sometimes tell me that I should not be programming, but I actually rolled up my sleeves a couple of weeks ago and started to do vibe coding, just to understand how this technology works and what it really does. And it was amazing to see how these agents, when you’re building a website or an application, are actually working together. It was surreal. I mean, in our lifetime I could not have imagined that we would write software and there would be agents actually testing the entire piece of software, self-healing, and fixing the code. So there is a lot of activity happening in that area in terms of agents doing things, obviously guided by humans. I would encourage everyone on the podcast here to do some vibe coding; you’re going to see a whole new world of how agents operate.

Isaac Sacolick:

Can you go deeper on that, Niraj? We haven’t used the term vibe coding here on the coffee hour. What is it? What does it allow you to do? Go ahead.

Niraj Tenany:

Imagine that you had a piece of software and you could actually engage and talk to that software in English; English is your new programming language. Let’s say you wanted to build a new website for your company. You could utilize this vibe coding concept, this emerging category of software, and tell it, hey, I want to build a website similar to this other website that I have, but here’s my unique content. And you will see that the software actually goes and collects all the information, does the grunt work of building it, creates the search engine optimization for your website, adds lead-generation magnets to it, and creates content for you so that you can post on a regular basis. This is a very good example of how agents are bringing an idea to life in a matter of minutes or hours. As Joanne and the other panelists mentioned earlier, the acceleration has happened rapidly, and vibe coding is another modern way of working that takes advantage of these code agents that are out there. Some of them will be your own, some will be from third parties.

Isaac Sacolick:

Very interesting. Somebody asked me a few days ago about how to picture agents, and what you’re describing is vibe coding from the ground up. What I said is: if you know what an API is, and you know that most APIs are programmed through endpoints and JSON, now think of an API that takes English as a front end. You can speak to the agent, ask it what it does, and ask it to start doing things for you, because now the agent is connected to your operations. Now take this one step further. We talked about orchestrating services through API calls and building workflows around them. Workflows come from the generation of rule-based, linear systems: we have a set of inputs, a set of processes along the way, a certain set of things we’re trying to achieve, and we build a workflow around it, part of it with people doing steps, part of it with machines doing steps.

Now we turn it upside down into a less deterministic approach, and instead of programming a workflow, what we’re enabling is role-based activities. It’s almost like you and I walking into a room where there’s an architect, a developer, and a tester. They’re all playing their different roles as agents, and we’re having a conversation with them, saying, this is the type of thing I’m trying to build. And they’re having a conversation among themselves as different agents, figuring out what they’re doing and orchestrating how, collectively, they can respond to your request. It’s really interesting to watch, because the logs of an API are observable JSON-rich streams that you need technology to understand, but when you look at an agent-to-agent conversation, it’s happening in English, and you can see all the conversations these agents are having that lead them to come back with the activities they’re recommending and actually actioning for you. It’s very interesting. Let’s go around the room, and we’ll come back to Niraj on this. John, we’re talking now about human agent teams and their business value. What are you seeing out there?
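
The role-based pattern described above can be sketched in a few lines. This is a hypothetical illustration: the class names and the stub `respond` method are invented, and a real system would call a language model in place of the stub.

```python
# Minimal sketch of role-based agents exchanging plain-English messages.
# Unlike a fixed workflow, each agent plays a role, and the log is a
# human-readable English audit trail rather than a JSON stream.
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str  # e.g. "architect", "developer", "tester"

    def respond(self, request: str) -> str:
        # A real agent would call an LLM here; this stub just narrates.
        return f"As the {self.role}, here is my take on: {request}"

@dataclass
class Team:
    agents: list
    log: list = field(default_factory=list)  # the English audit trail

    def discuss(self, request: str) -> list:
        replies = []
        for agent in self.agents:
            reply = agent.respond(request)
            self.log.append((agent.role, reply))  # readable by a human
            replies.append(reply)
        return replies

team = Team([Agent("architect"), Agent("developer"), Agent("tester")])
replies = team.discuss("Build a prototype intake form")
for role, line in team.log:
    print(f"{role}: {line}")
```

The point of the sketch is the log: every exchange stays in English, so a person can follow what the agents decided without tooling.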

John Patrick Luethe:

Yeah, and I love the visual of the swarm teams; I always think about The Matrix, because the characters in The Matrix are up against swarms of agents. What I’ve seen is my friends at neat tech companies building agents that have specific purposes. One of my friends at Pioneer Square Labs, Kevin Runway, built an agent that helps with software development. He interacts with it via Slack and GitHub, and it can do pull requests and review code; you can give it text and it’ll create requirements. Almost anything an intern would do, that’s the personality he gave it in software development. It’ll do those tasks, and they communicate with each other via the normal software development tools. Another of my friends is at a phone company, I won’t say which one, and he’s on the compliance team. They’re using all sorts of generative AI to help make sure that people have the right permissions and that they don’t have too many permissions.

These agents have tasks to reach out to managers and say, hey, does your user have the right permissions here? It’s really helpful when you have a couple hundred thousand people in an org, because you can go through the whole org on a quarterly basis, ask every manager if their people have the right permissions, and hint, hey, this person has access to this thing, but we don’t really think they should. So that’s what I’m seeing with my friends: really neat agents that have specific purposes and act on their own to make things better and do tasks that people don’t want to do.
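
The access-review pattern described above can be sketched roughly as follows. This is hypothetical: the role-to-permission map, the user records, and the function name are all invented for illustration, and a real compliance agent would pull this data from an identity provider.

```python
# Sketch of a permissions-review agent: compare each user's permissions
# to the expected set for their role, and draft a plain-English question
# for their manager about anything outside that set.
EXPECTED = {
    "engineer": {"repo:read", "repo:write"},
    "analyst": {"dashboard:read"},
}

def review(users):
    """Return (manager, question) pairs for out-of-role permissions."""
    findings = []
    for u in users:
        extra = u["permissions"] - EXPECTED.get(u["role"], set())
        if extra:
            question = (
                f"{u['name']} ({u['role']}) has access to "
                f"{', '.join(sorted(extra))}, which looks outside the "
                f"role. Is that access still needed?"
            )
            findings.append((u["manager"], question))
    return findings

users = [
    {"name": "Ava", "role": "analyst", "manager": "Sam",
     "permissions": {"dashboard:read", "repo:write"}},
    {"name": "Ben", "role": "engineer", "manager": "Sam",
     "permissions": {"repo:read"}},
]
for manager, question in review(users):
    print(f"To {manager}: {question}")
```

Here only Ava is flagged: her `repo:write` access falls outside the analyst role, so her manager gets the question, while Ben’s subset of engineer permissions passes silently.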

Isaac Sacolick:

Very cool. I’m really interested, Derrick: what is the security version of a human agent team? I mean, the data is bigger, faster, more complex; the bad guys are bigger, faster, more complex. What does a human agent team look like in security?

Derrick Butts:

Absolutely. You’re seeing this already in some of the architectures; they call them human AI agent architectures, and you see them used in Microsoft, Darktrace, CrowdStrike, and Rapid7. They all have what you call these threat commands, and these threat commands are doing just what was described: using artificial intelligence and analysts together to review the anomalies that come across their screens, to double-check whether these threats or anomalies are real, and to prioritize those threats based on the human analyst investigating and saying, yes, I can validate this. I think it’s something we’re going to see more and more, because people need to become confident and trust the systems they’re using when these AIs are spitting out information, to make sure it is valid and substantial. And from a value position, it helps them make faster decisions.

It helps them analyze a threat and say, yeah, this is a threat, and helps them escalate what needs to be done to mitigate it. Because of the speed at which they can do this, it’s also going to reduce costs for the particular customer they’re working with: letting them know what needs to be done, how to isolate and contain, and what fixes they need to put in place. The key thing is the AI agents working to increase the accuracy of these detections, because the AI-driven attacks are going to come faster and be more complex, and humans alone just can’t keep up with that. I’m also seeing this in customer service. A lot of companies are moving from their online human interface to AI chatbots that customers now interact with directly using voice command streams, and it makes it harder to even get a human agent, but there’s always one available in the background. So it depends on the industry you’re working in, but from a cyber perspective, they’ve been using this for at least a good year and they’re trying to perfect it and make it better. And again, the businesses that moved into this area are going to be leading in it, because that’s what we need: security, trust, and reliability, with humans and AI working together until the system can become more reliable and self-sustaining.
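
The human-in-the-loop triage described above can be sketched like this. The sketch is hypothetical: the score values, field names, and the triage function are invented, and in a real system the scores would come from detection models and the verdicts from an analyst console.

```python
# Sketch of human-in-the-loop threat triage: an AI scorer ranks alerts,
# a human analyst confirms or dismisses each one, and only confirmed
# threats are escalated. The AI prioritizes; the human decides.
def triage(alerts, analyst_verdicts):
    """Rank alerts by AI score, keep only analyst-confirmed ones."""
    ranked = sorted(alerts, key=lambda a: a["ai_score"], reverse=True)
    return [a for a in ranked if analyst_verdicts.get(a["id"]) is True]

alerts = [
    {"id": "A1", "ai_score": 0.91, "summary": "Unusual login location"},
    {"id": "A2", "ai_score": 0.35, "summary": "Spike in DNS queries"},
    {"id": "A3", "ai_score": 0.88, "summary": "Privilege escalation"},
]
verdicts = {"A1": True, "A2": False, "A3": True}  # the human judgment
for alert in triage(alerts, verdicts):
    print(f"Escalate {alert['id']}: {alert['summary']}")
```

The design choice is that the model only orders the queue; nothing escalates without an explicit analyst verdict, which is the trust boundary Derrick describes.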

Isaac Sacolick:

Thank you, Derrick. Let’s just keep going around the room. Joanne, we’re talking about human agent teams. There’s also a question here from Barat in the comment stream; he wants some examples. Who should we personify? What company or companies might we point to and say, this is what a frontier firm looks like? Joanne, do you have any examples? And then we’ll bring in Joe and Niraj to see if they have any.

Joanne Friedman:

Companies that are frontier organizations are those that, what I call, humanify their AI. What I mean by that is a company that humanifies is one that understands that agents learn; they’re constantly evolving, constantly changing and adapting, which is great, but they also need to be trusted, and they also need a human in the loop, meaning guardrails that are assigned based on the role or persona of the individual asking a question or looking for an answer. With human in the loop, you’re giving the system an opportunity to leverage the institutional and tribal knowledge of the workforce. That’s the in-tandem human agent experience that I think a lot of frontier firms are trying to go after. We do it from the perspective of a manufacturing enterprise, so as personas you would have a worker on a plant floor, an engineer, a controls engineer, a plant manager, a supplier, a customer, any of those roles.

The companies that are really excelling at this are those that understand that human in the loop doesn’t just mean having a human overseeing the AI from a trust perspective. It’s drawing the humans into the loop of what used to be called a workflow and allowing them the adaptability to use the tools as they need on a daily basis. Because we all know, particularly as executives, you change hats every two minutes; you’re constantly doing things that involve a different role. If you’re tied to an old-school notion of what a role means, you would not be permitted, from a security or business perspective, to see the information that actually adds the context and the nuance that you need, the semantic layer, if you will, that makes AI work.

Isaac Sacolick:

Thank you, Joanne. I actually like that definition of looking at the partnership, and I think that’s one of the reasons we don’t have a lot of shiny examples here. We talked about AI in customer experiences and in products a couple of weeks ago, maybe it was last week, and we’re seeing more AI and agentic AI being used on the inside: on the future of work, on workflow, on the concept of virtual agents. I think that’s where we’re seeing most of it. So I’m going to wait for Niraj to share some examples of where people are really excelling at it. Folks, before we go back to Niraj and Joe and our question about collaboration on AI governance, I just want to thank Netwoven for sponsoring today’s episode. Netwoven is a trusted Microsoft solutions partner that leads organizations on their AI journey. They help organizations build AI-powered applications, unlock data for insights, and protect their organizations from cyber threats. You can learn more about Netwoven at www.netwoven.com. And thank you, Niraj, for being such a strong supporter of Coffee with Digital Trailblazers. I’ll announce our upcoming episodes at the end of our session. I want to keep going with our conversation, so let’s bring Joe back. Joe, I want to hear what you think about human agent teams, and then we’ll bring Niraj back and start talking about governance as we balance being on the frontier without breaking things. Go ahead, Joe.

Joe Puglisi:

So a couple of months ago, I was fortunate enough to get a peek behind the curtain at a major consulting firm, where the chief technology officer had embarked on a project to replace specific roles, individual people in the organization, with agents. What I found incredibly unique and exciting about the architecture was that the network of agents, or as Joanne said, the swarm of agents, were tied together with an interface we’re all familiar with: English. He had architected it such that every time you replaced a human in the loop with an agent, the interface didn’t really change, because you would pass the information along in English. So it was humans to agents, agents to agents, and agents to humans, all in English. I found that fascinating.
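Joe’s description can be sketched in code. The example below is a hypothetical illustration, not the consulting firm’s actual system: every participant, human or agent, implements the same handler interface and exchanges plain-English strings, so swapping a person for an agent leaves the rest of the network unchanged. All class and participant names here are invented for illustration.

```python
# Hypothetical sketch of the "English as the interface" idea: every
# participant, human or agent, handles plain-English messages, so
# replacing one with the other changes nothing downstream.

class Participant:
    def __init__(self, name):
        self.name = name

    def handle(self, message: str) -> str:
        raise NotImplementedError

class HumanReviewer(Participant):
    def handle(self, message: str) -> str:
        # In a real system this would prompt a person; here we simulate.
        return f"{self.name} approves: {message}"

class AgentReviewer(Participant):
    def handle(self, message: str) -> str:
        # In a real system this would call an LLM; here we simulate.
        return f"{self.name} approves: {message}"

def run_pipeline(participants, message: str) -> str:
    # Each participant passes its English-language output to the next,
    # regardless of whether it is a human or an agent.
    for p in participants:
        message = p.handle(message)
    return message

# Replacing the human with an agent requires no interface change:
before = run_pipeline([HumanReviewer("Pat"), AgentReviewer("Bot-2")], "Draft contract v3")
after = run_pipeline([AgentReviewer("Bot-1"), AgentReviewer("Bot-2")], "Draft contract v3")
```

The design choice Joe highlights falls out naturally here: because the interface is a string of English, the pipeline never needs to know which participants are people and which are agents.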

Isaac Sacolick:

Yeah, it’s very sci-fi. It feels like you’re in an episode of Star Trek, except there’s actually a visual of what’s happening behind the scenes and how it’s working. My concern is that we used to talk about shadow IT, and now it’s not just IT, it’s shadow AI. It’s potentially a whole swarm of shadow AI: all these agents and all these platforms. I’m actually writing an article around this today that will probably come out on Monday. Go ahead, Joe.

Joe Puglisi:

But think, guys, about how easy it is to implement this. You have clear visibility into who’s saying what, you have complete audit trails, and there’s no major culture change. It’s just that you’re now talking to agents, and agents are talking to agents, instead of just people talking to people.

Isaac Sacolick:

So let’s bring Niraj back. I want you to comment on this conversation. Who’s doing this well, or at least what are they doing well? What are some examples of organizations using agentic AI or vibe coding and starting to really see value from it? And then let’s go on to our next area: how do we do this reliably? How do we do this with security in mind? How do we close the collaboration gap between our stakeholders in enterprises, between risk, legal, security, and IT, and make sure that when we start becoming a frontier firm, we’re not getting arrows stuck in our back?

Niraj Tenany:

Yeah, great question, Isaac. I’m based here in Silicon Valley, and as you know, everybody lives and breathes technology here. Every other person next to me probably has a startup or is doing something or other, so I live in an area with a lot of tech hype. Historically, Microsoft hasn’t been first to the party with any technology. They wait for it to become a billion-dollar category and then they jump in and sweep the category. But this time we are seeing very different energy from Microsoft. They are first to the party everywhere. When I went to customers in the past, everything else was being discussed and Microsoft would come up later, but now conversations with CIOs are happening with Microsoft embedded in them. That is great news for companies like ours, and Microsoft has a great deal of penetration and connection with the enterprise segment, both small and large enterprises.

So when we are working with customers, I just mentioned in the chat that there is a process we recommend: the crawl, walk, run, fly journey. Different companies are in different phases of their journey, but if one is in the crawl phase, we typically recommend organizations use low-risk business areas to test things out, to get their feet wet and really get going. So the initial applications we work on with customers, and what we are seeing, are in lower-risk areas of internal business functions. Then people go on to the walk and run phases, and we see them utilizing AI for more critical business functions like engineering, supply chain, and sales and marketing, where there is more customer and supplier interaction and the business value, and the risk, are also higher. That’s what we are seeing in organizations as they go down this journey.

Isaac Sacolick:

I’m seeing very similar things, but I am seeing more firms talk about getting into run and fly mode. They’re using AI in areas like customer support and customer care. They’re not making the old mistake of building fully autonomous but rule-oriented chatbots; those lead to very frustrating experiences. Instead, they’re pairing very smart human agents with a virtual agent, very much the human judgment plus machine intelligence you described earlier in our conversation. So when you speak to one of these agents, they know who you are, they know what you’ve bought, and they’re listening to your problem or asking the agent for options to present to you as solutions. It’s becoming a much different experience and a much more controlled one. Niraj, before I go to the floor, talk to me about this idea of collaborating on AI governance. How do we make sure that as we’re building frontier capabilities, we’re not opening ourselves up to more risk?

Niraj Tenany:

That’s a great point. Great question, Isaac. Back in November, I gave a talk on AI security and governance at the Microsoft campus, and I have to tell you, governance is so critical. I think it is the only glue that keeps the environment, if you will, in sane order and keeps it well structured. With AI, you’ve got security risk coming, you’ve got compliance risk coming, a number of risks are coming, and having proper governance structures in place is important. Now, governance with a big G is often not received well in organizations; a lot of people think of it as a set of documents that is overly imposing on how innovation occurs. But people don’t have to bite off the whole thing as they get started. You can do a crawl, walk, run, fly journey on governance as well: have the foundations in place, work toward whatever your AI goals are, and iterate and develop that model. We’ve seen that implemented very, very successfully. And the rate at which you can build governance in an organization is also accelerating, because there are advanced tools out there; things you could not do before in governance, you can do today.

Isaac Sacolick:

I agree with that notion of crawl, walk, run, fly on governance. We talk about this a lot here, about how governance is just a very difficult word; it’s often associated with one-and-done policies, and we know AI is something that’s evolving. Niraj, in my keynote that I did for you in February, which is being updated for September, I argue that you can’t separate governance from strategy when it comes to AI. They need to be intermingled, particularly to get people’s interest and to connect value to the guardrails you really need to have in place. And I’m going to add a third dimension: as we start doing more with agentic AI and building AI agents across different platforms, I don’t think you can separate operations from it either. It needs a completely holistic definition. You shouldn’t have your policies up front and how it’s implemented and managed on the back end where nobody can see it. It’s too core to what organizations need to move toward over the next few years, so you have to have strategy, governance, and operations all together. Joanne, I’d love to hear your opinion on this.

Joanne Friedman:

I think a holistic perspective is absolutely mandatory. That would be table stakes for anybody going down the AI route, regardless of whether it’s gen AI, agentic AI, math AI, physics AI; it doesn’t matter. You have to take the holistic approach. And I think the two words that are going to emerge very strongly over the next year, particularly with respect to governance, are provenance and lineage, because we need to understand where this data came from (its provenance) and where it moved to and what impact that had (its lineage), over and over again as we go forward. The table stakes will constantly be evolving because AI will continue to evolve. We have to wrap our minds around the fact that even though data is constantly changing, context is now changing even faster, and the context in which we use AI is changing faster.

So what was designed for a static system, meaning an IT system, or as part of governance or risk or any of those policy-based disciplines, has to be almost as flexible as the rest of the AI system, almost to the level of vibe coding. It’s going to be a challenge to do that. Provenance and lineage give us the direction to understand how to iterate, and they also give the systems, meaning the agentic tools we’re using, the opportunity to learn and evolve to keep up with what will be a continually changing environment around compliance, regulation, and even security.
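One minimal way to picture the provenance-and-lineage idea Joanne raises is a record that carries its origin and its transformation history with it. This is a hypothetical sketch, not any specific product’s data model; the class and field names are invented for illustration.

```python
# Hypothetical sketch: carry provenance (where data came from) and
# lineage (what happened to it) alongside the data itself.

from dataclasses import dataclass, field

@dataclass
class TrackedData:
    value: object
    provenance: str                                # original source of the data
    lineage: list = field(default_factory=list)    # ordered transformation history

    def transform(self, step_name, fn):
        # Apply a transformation and append it to the lineage trail.
        self.value = fn(self.value)
        self.lineage.append(step_name)
        return self

d = TrackedData(value=[3, 1, 2], provenance="crm_export_2024")
d.transform("dedupe", lambda v: sorted(set(v)))
d.transform("scale_x10", lambda v: [x * 10 for x in v])
```

After the two steps, `d.provenance` still answers "where did this come from?" and `d.lineage` answers "what happened to it?", which is the audit trail governance and compliance teams need as context keeps changing.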

Isaac Sacolick:

Thank you, Joanne. I agree with you. A big part of this is understanding the elements of data, particularly provenance, lineage, and security. Much as people have access rights to information, we need to make sure our agents have the right access rights, much as we secure data for different contexts. I want to hear from Derrick, and from Niraj when he comes back, on the concept of virtual data rooms, where we put our agents in a room where they are sanctioned for a certain level of data they should be getting access to. Derrick, what are your thoughts? How do we bring these different factions together, our risk, legal, security, and IT teams, and really build a holistic AI strategy, governance, and operations around it?

Derrick Butts:

Well, I think Joe mentioned it perfectly: they have to communicate. So one of the first things is establishing a business or cross-functional AI governance board to really look at what the risks are across all these particular areas and how those translate. When you’re looking at the business risks, the operational risks, the cyber risks, the legal risks, all those things come together. You need to understand, as Niraj mentioned, from the lowest risk impact to the highest, how those are going to translate to business impacts overall, and how we can foster that understanding across the whole organization. Because it is an organizational challenge; it’s not just one or a couple of areas. And I think once we get everybody involved in understanding that, it’s going to be easier to work with compliance, to work on resilience, to work on recommendations to mitigate risk, even on innovation strategies, to make it work. Everybody has to have some sort of ownership within this AI governance to make sure they understand the domino effect of the change across departments. And the other thing, the educational piece, is going to be huge. This isn’t something that’s one and done. We have to continue to educate people on the changes that are going to take place with AI and all the different risk factors to look for. Tabletop exercises from an AI perspective are going to be key.

Isaac Sacolick:

Yeah, I agree with that. I was chatting yesterday, on a panel, with the CTO of one of the prominent nonprofit groups. They’re using AI for grant writing for nonprofits, and they’ve got a ton of PII and a ton of financial information in there. And the number one thing they have to do before doing anything around AI is educate 300 people about what happens if they start cutting and pasting that information into a public AI tool out there. Exactly, exactly.

Derrick Butts:

And having that conversation at the leadership level, I think, is also going to help escalate the way we can work with compliance, because the leadership needs to understand how the ripple effect is going to affect the overall organization. And I don’t see enough tabletop exercises happening, especially at the AI level.

Isaac Sacolick:

Thank you, Derrick. We’re going to go to Joe and John, and then we’ll go back to Niraj. Hello, Joe.

Joe Puglisi:

Well, Derrick brought up my favorite term: communication. I think, as he said, it is essential that you get everybody involved in the conversation across departments. You need a cross-functional team, and as the leader, as the trailblazer, you want to be the head of the department of "know," K-N-O-W, not the department of "no." Know, lead, guide, discuss. Don’t try to stop it; you can’t. Try to guide it and make sure that everybody is involved in the conversation from day one.

Isaac Sacolick:

So you’re going to write that blog post for us, right? The department of "know" versus the department of "no."

Joe Puglisi:

I think I did, maybe. I’ll have to go back and look, but if I haven’t, I’ll certainly fill that one in.

Isaac Sacolick:

No, it’s a great concept, because I remember doing my first AI governance keynotes a bunch of months ago, maybe a year ago, and I was getting the nos mostly from either risk or data governance leaders. They just didn’t know how to manage the underlying situations. And I think this is really about educating the employees. It’s about setting up environments so that people can work. It’s about setting up strategy so people understand what areas of focus to really do their experimentation in. John, what am I leaving out?

John Patrick Luethe:

I was going to say, the thing that really shocks me is how many companies had all these really careful processes, and then when AI came out, they started using outside technology with so little testing that they completely bypassed the good practices they had in release management, change management, and things like that. The one thing I would tell everyone: anytime people want to use new technologies, if they can take a step back and try to understand the fundamentals, they’re going to be in a better spot to manage the change, release the technology, and understand the risks that come with it. Taking a step back and having those conversations is what’s really key when you’re releasing technology. Otherwise it could be sharing all sorts of information; it could be saying things to the public that you don’t want it to.

Isaac Sacolick:

Yeah, there are some great examples here. Fossil, thank you for sharing in the comment stream; he talks about his crawl-and-walk philosophy around their AI journey and has a bunch of examples of what they’ve been implementing around contract management and the customer contact center. Thank you for sharing that. Niraj, I want to bring you back, because we went from telling employees that using the open public large language models is a bad idea, here’s why, from a data perspective and an IP perspective. Then Microsoft came back and said, okay, you have your data here, you have your applications and your SaaS here; we’re giving you Copilot, or making it available, and we’re giving you the ability to do it with code. But I get the sense that we have to take it a step further. Right now we’re getting these agents operating in, let’s just call it, a firewalled-off environment. What else do we need to do to implement an AI and governance strategy so that agents have access to the right data?

Niraj Tenany:

Yeah, thanks, Isaac. I usually don’t talk products at these events, but in this particular instance I’ve got to bring this up: if you don’t know Microsoft Purview, you have to look at it. Back in 2003, Microsoft came out with SharePoint, and not many people looked at it. We jumped on it and built an entire company around it, and if you look around, all the competitors of SharePoint are gone. Microsoft has hit another home run with this product called Purview. It brings many things together. The reason I’m saying this is because it is the foundation for a lot of AI-driven data security and protection. So I encourage you to look at it; if you need more information, I put my email in the chat. Back in 2020, one of the world’s largest semiconductor companies reached out to us to build a data protection program.

They were paranoid about protection at that time. I’ve got to be honest, I had no idea about the AI wave back in 2020, but we set out with this company to build their data classification and data protection scheme. It was an arduous, grueling exercise. We built the entire thing for the organization; half the customer’s people were happy, half were not, because of how grueling the exercise was. But I get calls from the same company now thanking me for the comprehensive data protection we built, which is proving very, very valuable in the age of AI. Out of that work, we actually spun up two products. So Netwoven is also a product company; one product we have is around data security. You all may have heard the term virtual data room. Virtual data rooms were primarily used by organizations for mergers and acquisitions and capital fundraising.

When we set out to work with the semiconductor company, we asked: why are virtual data rooms only used for those two use cases? Why are they not used to manage and protect intellectual property, for auditing, for secure customer collaboration, for secure supplier collaboration? So we ended up building a new category called secure collaboration using virtual data rooms. Check out our product at http://www.gothreesixtyfive.com; I’ll put that in the chat. It allows you to create data rooms where you can secure your content, and organizations using the product are freely using artificial intelligence because that data is secure. If anybody tried to upload that data into any of the generative AI tools, or wanted to utilize it for agents, they would not be able to until they had the permission. So I just wanted to close by saying that data security and protection are paramount for the rapid adoption of generative and agentic AI.
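The permissioning idea behind a virtual data room, content released to people or AI agents only when they are sanctioned for that room, can be sketched roughly as below. This is a hypothetical illustration of the general pattern, not the actual product’s design or API; all names are invented.

```python
# Hypothetical sketch of a virtual data room: content is released to a
# requester (human or AI agent) only if that requester is sanctioned.

class DataRoom:
    def __init__(self, name):
        self.name = name
        self._documents = {}
        self._sanctioned = set()   # identities allowed to read this room

    def add_document(self, doc_id, content):
        self._documents[doc_id] = content

    def sanction(self, identity):
        self._sanctioned.add(identity)

    def read(self, identity, doc_id):
        # Deny by default: agents and humans alike need explicit permission.
        if identity not in self._sanctioned:
            raise PermissionError(f"{identity} is not sanctioned for room {self.name}")
        return self._documents[doc_id]

room = DataRoom("supplier-collab")
room.add_document("spec-001", "Confidential supplier spec")
room.sanction("procurement-agent")

ok = room.read("procurement-agent", "spec-001")   # permitted
try:
    room.read("unvetted-chatbot", "spec-001")     # denied by default
    denied = False
except PermissionError:
    denied = True
```

The deny-by-default check is the point: an agent that has not been sanctioned simply cannot pull the content into a generative AI tool, which is what lets the rest of the organization use AI freely.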

Isaac Sacolick:

I love hearing about these solutions. I remember when I first saw Purview, I was advising a client at the time and saying there were some other things we probably needed to get to before something like Purview; all the work around data ownership and data classification can be an arduous task. But I’ve seen recent implementations at companies, and it is a heck of a lot easier and a heck of a lot more automated. It still needs a lot of guidelines, and you bring together your IT, your data, and your business folks to be able to do it. And when we talk about being a frontier firm, I’ll share one of the recipes from the AI governance keynote that I use. I’ll be doing it in Atlanta on September 10th, so do reach out if you’re interested in participating.

I talk about doing the things frontier firms do: not being on the bleeding edge but on the value edge; being able to test things; being top-down strategic about areas where you really want AI capabilities to fit your goals, and in other areas empowering your organization to experiment and find where AI is delivering value. And you’re going to use a very agile approach: you’re going to bring the folks focused on innovation together, in parallel, collaborating with your risk, legal, and data security teams, and say, you know what, we’re going to do some security up front, but we’re going to do a lot of security in parallel with our innovation. I think that’s what frontier firms are all about. Niraj, I’m going to give you the final word on this.

Niraj Tenany:

Thank you, Isaac. Thanks to the attendees as well as the panelists. This is an exciting journey, and as with everything new, it starts with fear and anxiety and then moves toward excitement. I can tell you how excited I am, and how others are. My kids tell me their dad has gone crazy because he’s working till one or two at night and all they hear from him is AI. But the reality is that this is an exciting journey to be on, and we all should embrace it. Whether you are a CIO or a business analyst, I encourage everybody to play with the technology. The amount of ideas and innovation, the grounding of your thoughts in reality, you will experience that as you play with it. There is a sea of game-changing activity happening. So thank you again. I look forward to seeing whoever can make it at the Atlanta event, and if you need more information, I put my contact information and company information in the chat. Looking forward to seeing you all soon at another event.

Isaac Sacolick:

Yes. Thank you. Thank you for a really lively conversation. Hello, Greg, and thank you for this comment: data management and AI governance are critical enablers for the public sector’s AI success. Really interesting conversation bringing together both the frontier and the governance that’s required. I want to thank Netwoven for being our sponsor today. Start your AI journey with Netwoven and become an AI-first business, operating with AI agents as digital teammates and empowering human and AI collaboration at scale. Please visit netwoven.com to learn more. And again, I’ll be speaking at the Microsoft Center on September 10th with Netwoven as our sponsor; do reach out to either Niraj or myself if you’re interested in attending. For our upcoming coffee hours, you can visit StarCIO.com/coffee/next-event; if you ever get lost, it always redirects to the upcoming event. Next week we’re talking about AI-era transformation: agents versus AI agents, large versus small LLMs.

That will be next week. On the first, we’ll be talking about strategies for AI-ready data: turning data landfills into business gold mines. That actually came up last week, and I wanted to get that conversation going really quickly. And then on the eighth, slightly off of AI, we’ll be talking about DevSecOps risk non-negotiables: reliability, innovation, security, and culture in the new era of DevSecOps. Folks, thank you for joining a very lively event today. I hope you’ll join us in the coming weeks. Thank you again, Niraj, for joining us. Thank you, Derrick, Joanne, John, Joe, and Liz for being our expert panel today, and I’ll see you all here next week for the next episode of Coffee with Digital Trailblazers. Everybody have a great weekend.

Joe Puglisi:

Thank you.

Digital Trailblazer Community

Isaac Sacolick

Our community of Digital Trailblazers is for leaders of digital transformation initiatives, people aspiring to tech/data/AI leadership roles, and C-level leaders who develop digital transformation as a core organizational competency.

Review the Community Guidelines