Infogov, Lip Service, and the Problem With AI

Information governance continues to be an important, yet often ignored and/or misunderstood issue, for organizations. At heart, it’s not “just” records management, but should involve an overall strategy to be able to maintain, archive, access, and remove (as needed) information. In this Two Question Tuesday, Steve Weissman talks about why companies should deal with information governance issues and expresses serious concerns about the role of artificial intelligence in the context of IG.

Steve lives and breathes #infogov as The Info Gov Guy at The Holly Group and has a decades-long track record of helping organizations get information governance right. You can find Steve on LinkedIn.

Like what you hear? Subscribe to our channel, but also consider subscribing to DIR. Contact Bryant at [email protected]. Single issue subscriptions can be purchased by clicking here.

Today we’ve got Steve Weissman, the Info Gov Guy at The Holly Group. Steve and I have known each other for quite a while, and we will try to keep this to two questions, which, if you’ve seen us talk, is almost impossible; no guarantees. We’re going to talk about information governance today, obviously.

So the first question is a very basic one that you’re probably tired of answering: Why should companies stop giving lip service to information governance?

Weissman: I would almost take the counter question and say “why should they start?” I say that because in my view there are far too many organizations right now who aren’t even really thinking about it. Lip service is, of course, not nearly enough.

Most organizations have information issues. They can’t find stuff, or they know it’s in there but not where it is. How do we get to it? Or we have duplicates. These are not new issues, but they’ve gone on for so long that a lot of people think of them as just a chronic pain, like tennis elbow. You don’t go to the doctor just because your elbow hurts. But while you’re at the doctor for something else, you put your shirt on and get a twinge. The doc says, hey, what’s up with that? And then you can get it treated.

I would like to see more organizations start by paying lip service, to at least say we have an issue here. As the flow of information continues to accelerate, it’s just getting worse. Regulations are changing. If you’re in the government space, Freedom of Information requirements are a driver.

In this information business of ours we love to talk about how to keep from being sued, but while I’m a big proponent of fixating on the business problems, there are positive reasons to solve information governance problems as well, such as finding information faster to make a quicker decision, or having high confidence that the information you’re basing a decision on is of good quality.

To the spirit of your question, organizations should start doing more than pay lip service to it because it’s a real business issue and it’s only intensifying. But if you’re not thinking about it, I think it’s high time you start and at least get the lips flapping within your organization.

Records people get this. It’s the same basic discipline, except the way I define it, you want to apply all that records goodness to all the information in your place: structured and unstructured, paper and electronic. So the first step is to start talking about it.

There’s nothing like asking a question in the wrong way! This is going to be quite the segue because we’re talking about companies not paying attention to information governance and now we’re going to talk about AI in the context of governance.

Artificial intelligence is smoking hot right now with ChatGPT and everything else. How could AI help solve information governance problems or challenges that the current tools struggle with today?

Weissman: I don’t know that it can. That’s all! Thanks for coming. Good night everyone.

What drives me crazy about AI is the amount of time we’re spending talking about it in the context of doing information right. No, I’ll scratch that. All of the things that I have seen in the information space that are described as AI applications, they’re not. They’re just not. They’re great uses of machine learning.

I’m not really intending to split semantic hairs here, because they are different. I broadly think of machine learning as basically self-learning: you give it stuff to study, and it learns from it and gets better at things. Whereas AI I think of more as self-teaching; it’s making those connections independently (theoretically) instead of being fed them.

I can’t think – sitting here right now – of a worse idea than to apply AI to information governance, because information governance is, at its core, a human function. Somebody has to decide “what is the retention schedule on this document?” How long do we need to keep it around by statute, by policy, when does it become a legal liability?

If it’s a draft, you need to save the draft. Which one’s the official version of the thing? Those are decisions to be made by humans. Once those decisions are made, yes, there are plenty of tools on the market right now that work really, really well, even automatically classifying buckets of information to determine where they fit and which policies apply. They are awesome.

They make mistakes. Often mistakes happen because it’s all probability based, right? So you can tune the threshold. Then the question is, “How accurate is accurate enough?” The real beauty is when an infogov system lets the people focus more on the exceptions than on the totality, and then you can have the system run automatic reports.
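The exception-focused workflow Weissman describes can be sketched in a few lines. This is purely illustrative: the `classify()` stub, its bucket names, and the 0.9 threshold are assumptions for the example, not part of any product mentioned in the interview.

```python
# Hypothetical sketch: auto-file high-confidence classifications,
# route everything below a confidence threshold to a human reviewer.

def classify(doc: str) -> tuple[str, float]:
    """Stand-in for a real auto-classifier: returns (retention_bucket, confidence)."""
    # Toy rule for the demo: invoices we are "sure" about; anything else is uncertain.
    if "invoice" in doc.lower():
        return ("finance-7yr", 0.97)
    return ("general-3yr", 0.62)

def route(docs: list[str], threshold: float = 0.9):
    auto_filed, exceptions = [], []
    for doc in docs:
        bucket, confidence = classify(doc)
        if confidence >= threshold:
            auto_filed.append((doc, bucket))              # system files it automatically
        else:
            exceptions.append((doc, bucket, confidence))  # a human makes the call
    return auto_filed, exceptions

auto_filed, exceptions = route(["Invoice #42 for Q3", "Meeting notes, March"])
```

Raising the threshold answers “how accurate is accurate enough?” conservatively: fewer documents are filed unattended, and more land in the human review queue.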

Let’s say: hey, in the next 30 days these things are expiring. Or: hey, a duplicate document just got checked in; we looked at the hashes and we think it’s a dupe, or it’s a PDF version of this Word doc. But that’s not AI, not the kind we’re all ready to run for the hills from.
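The “we looked at the hashes” check above is ordinary deterministic engineering, not AI. A minimal sketch using SHA-256 from Python’s standard library, with made-up document names and in-memory bytes standing in for real files:

```python
# Sketch of hash-based duplicate detection on check-in.
# A real system would hash file bytes from storage; names here are illustrative.
import hashlib

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def find_duplicates(store: dict[str, bytes]) -> list[tuple[str, str]]:
    """Return (existing_doc, duplicate_doc) pairs whose bytes are identical."""
    seen: dict[str, str] = {}  # digest -> first document seen with that content
    dupes = []
    for name, data in store.items():
        digest = content_hash(data)
        if digest in seen:
            dupes.append((seen[digest], name))  # flag the newcomer as a dupe
        else:
            seen[digest] = name
    return dupes

dupes = find_duplicates({
    "policy_v1.docx": b"retention policy text",
    "policy_copy.docx": b"retention policy text",  # exact byte-for-byte copy
    "minutes.docx": b"board minutes",
})
```

Note the limitation implied in the interview: a byte hash only catches exact copies. The “PDF version of this Word doc” case needs content-level comparison, since the two files hash differently.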

I have concerns about that certainly on a social level.

Do we want ChatGPT writing our policies? That seems scary.

Sure, they’d be current only up to the date of the last document the tool was trained on.

Weissman: Yeah. Maybe you’ve done this. I hadn’t until I heard someone mention it. I had ChatGPT write a biography of Steve Weissman from Waltham, MA, Little League baseball coach, all of which are things associated with me. And it wrote one. I have to say it was fairly well written. But I had no idea that somewhere along the line, apparently, I got a degree in computer science.

So I did it again, and this time I was more specific: write me a biography of Steve Weissman, the Info Gov Guy. And it was better, but still fictitious. So I don’t want this thing determining what type of surgery I need or what disease I may have, or writing my privacy policies.

I do see these things running amok, because they’re based on human inputs, and human inputs are notoriously faulty. There are built-in biases. I’m not gonna belabor the point: I know it’s Two Question Tuesday, but you didn’t give me a time limit.

So I think the issue surrounding AI is a big one and an important one, and certainly worthy of the conversation, but honestly, not in the context of information governance. I just don’t see it. Anybody who wants to go down that route can, but please don’t hurt me, because I don’t trust what you’re going to get.

Well, thank you. As you just alluded to, we could probably go 24 questions on AI and information governance, and you’ve spent decades on this. So I’ll just say thank you. Steve, it is always good talking to you.
