Introduction
AK: Konnichiwa. Welcome to the AI Automation Dojo, the show where we look at modern project management and wonder if the Gantt chart should just be replaced with a Ouija board. Today we are wading into the treacherous waters of AI project delivery. You know, where the project plan is more of a strong suggestion and deliverables occasionally develop a mind of their own.
To guide us, we have brought in a special guest, a real-life delivery manager who has stared into the AI abyss and lived to tell the tale. We will be talking about projects where the requirements are a moving target, the key component, the AI model, is basically a black box with a “here be dragons” sign on it, and success is often defined as “well, it didn’t set the server on fire this time”.
I am your host, Andrzej Kinastowski, one of the founders of Office Samurai, where we believe the most important part of any project plan is a clearly marked emergency exit. If you have ever tried to explain a confidence score to a room full of executives who just want a simple yes or no answer, this episode may be your therapy session. Now, grab your favorite katana, or a very detailed risk register, and let us get to it.
Today, we are joined by Dagmara Sysuła, a delivery manager at Office Samurai. She joined us four years ago and has become the person who basically handles all of our delivery, giving me time to do things like podcasts. Dagmara, welcome to the podcast.
DS: Hello, Andrzej. Thank you for inviting me here.
AK: We will be talking about AI projects. Tell me, why are you the one I am talking to about AI projects?
DS: Because we are trying to deliver AI projects, and I think it is very important now to know how to do this, or at least to try to do it in some way, and to be aware of why it might be difficult, why it might end in success or not, and which way is best to deliver AI projects. Actually, I think this podcast is proof of it: AI is a very hot topic right now and everyone wants to touch this technology.
Project methodology: the necessity of a hybrid approach
DS: Somehow, yes, but we do not always know yet how to do this properly. We try to do it in the ways we already know, with the methods and methodologies we know. However, it is not as easy as we thought.
AK: So how does this differ? Because I also have a feeling that these projects, at least at this point in time, are quite different from what we are already doing. How or why are they so different from, say, a classical RPA project?
DS: Let us say that in general in IT we have mostly two ways of delivering and managing projects. The first is the waterfall way, where everything is structured, you have milestones, and the next step is predictable. RPA definitely belongs to waterfall. Why? Because it is usually a short project, 300 hours at most and many times shorter, and you have very strict steps: analysis, development, testing, deployment and maintenance.
In agile it is different, because you are working in sprints: you discover something, you check it, you run a retro, and you go through those kinds of cycles. But it is also somewhat predictable, because you are still delivering a defined project. In AI you need to be a bit more focused on experiments, and that is why agile is definitely the best way.
However, you still need to manage your budget somehow, and your stakeholders and your sponsors, and that is why you also need to look at waterfall as well. That is why we need to mix the two a bit and work out a new methodology. For now we are calling it a hybrid, because we are experimenting. We run experiments in an agile way, and here the Kanban part of agile is absolutely perfect, because Kanban is used mostly in maintenance projects. The scoring happens before the deployment, and for AI I think this is a very important moment to be sure that what we have prepared is ready to be implemented.
Of course, you can also look at this as a huge waterfall project, just in small pieces. What I would say from the Office Samurai perspective is that a few years ago we introduced retrospectives into all of the projects we deliver for our clients.
The crucial role of retrospectives in AI projects
DS: Every project finishes with a retrospective, and this is the part of the agile methodology I like very much. We sit together as the delivery team, because we have analysts, developers, leaders, architects, whoever touched the project, and we have a space for talking very honestly about everything that happened: the good things, the bad things, what we learned and what we did wrong during the project. And, very importantly, something I really love: the actions. What can we do better or differently on the next project, and what have we learned that we can apply in the future? I think that in AI projects it is absolutely crucial to have this kind of retrospective at the end of the project and to capture what we learned, because most of these AI projects are experimental or about discovering something new.
AK: When you introduced retrospectives into our organization, they were a little bit hard for some people to do at the beginning. But for the projects where things were a little more hectic, where something went wrong, it is extremely important that we do this, because otherwise we do not learn from those mistakes and problems. When it comes to AI projects, which are much newer and far less predictable, that makes those summaries, those wrap-ups, so important.
DS: It is also important to split the project into smaller pieces and to run a retro after the first small cycle, or the second. This is sometimes difficult in AI projects because, as you mentioned, they are not predictable. Sometimes we think we have some kind of plan, but it does not work, or it changes, or the technology changes and we need to change the way we deliver the project. That is why it is also important to have this pit stop and to give some kind of feedback to the sponsor. The checkpoint, the retrospective, is perfect for AI projects.
AK: Especially when things do not work as we expected them to. We have some recent examples of projects that started quite a long time ago. As painful as it is, sometimes we need to stop and ask ourselves: okay, maybe this technology was the right choice when we were starting, but in the meantime a different one has appeared. Maybe we do need a bit of a remake, maybe we do need a step back. Because AI technologies, and not only Gen AI but also the ones we use so much, like UiPath’s Document Understanding or Communications Mining, are evolving really fast too. I think it is completely right that you do not only do a retro at the end; you need those stops during the project to ask, okay, are we doing the right thing?
DS: I think it is very important. The need for a retro comes first of all from our developers. For me it is part of the culture I was trying to build, and it is absolutely fantastic, because they simply understand the project’s needs and the project flow. It is really amazing that it comes from the team.
Mixing methodologies: the necessity of waterfall for sponsors
AK: You mentioned Kanban which is a very basic tool. You mentioned the retros. What else? Are there any other pieces of tools or pieces of methodology that you think help AI projects a lot?
DS: I mentioned waterfall before, and I think we absolutely need to include it, because we need to communicate somehow with the sponsors: the people who do not necessarily understand the AI project or the technology, because it is complicated. You need a very good technical understanding to know why things happen the way they do.
But the sponsors need to have AI projects in the organization. It is in their targets, for sure, almost everywhere, and they have a budget and some kind of milestones. They also need to report at a steering committee, to their supervisors and directors, on how the project is going. That is why we also need a piece of waterfall here, to be sure we have not used up the full budget.
AK: It may be seen as a nuisance, but usually customers do not have unlimited budgets and unlimited time. We need this waterfall track to communicate with the stakeholders and also to do a bit of self-diagnosis. Where are we? Are we going at the speed we wanted? Do we want to pivot? Do we want to change technology? Are we on our way to achieving something?
AI projects are long-term maintenance, not short-term products
DS: What is also important when we are talking about budgeting, and this is maybe very important for sponsors: an AI project, from what we see, is not something where you get a direct, finished product. It is a long maintenance process. Once you decide to have an AI project, or just an AI tool or solution, you need to know that maybe in half a year the technology will change so much that you will need to change something across the project.
The models are learning all the time, and you need to maintain them somehow. It is not a case of putting money on the table and having a product in three, six or seven months. At the moment we recognize three or four main types of AI project on the market. The first one is usually the POC, or just the experimental project: we want to try something.
Later we have the biggest projects: the deployment ones. The third type I see today is buying an existing AI tool, like for example Copilot, and rolling it out to our teams. But it is like with any tool or application: you buy it, but you need to know how to use it. When we think about implementing Copilot or other tools that are already on the market, we need to be aware that we also need to buy knowledge, or at least some training, to show people how to use those tools.
AK: Implementation is not just buying the licenses and installing them on people’s computers, right? People need to know how to use the tools.
The fourth project type: AI strategy
AK: You talked about three kinds of projects. What is the fourth?
DS: I am thinking about strategy. A lot of companies are thinking right now about an AI strategy, and I would say this is the fourth type I see currently. It is very important to know what we want to implement, what kind of tools, and what kind of resources we are going to work with. It is extremely important not to run projects without an overall strategy and without changing the culture and the thinking in the organization.
AK: But this is something where we are again learning from the mistakes a lot of companies made with RPA. You need to do all those things more or less in parallel, because to build a strategy you also need some knowledge of what is possible in your organization. It is absolutely important to have the strategy, but not to do the strategy first.
DS: Yeah. Exactly.
AK: Or you are going to have to pay a lot of money to some consulting firm to prepare one for you. And by the way, they are probably going to generate half of it with Gen AI. Dagmara, I know that you have come up with a set of truths, or rules, for managing these projects. Tell me, I am really eager to hear what you think they are.
The 10 golden rules for managing AI projects
DS: The first one is definitely to manage the talent. You need to know what kind of capabilities and resources you have. First of all you need to build a team, and second, you need to train those people.
AK: But this also comes down to the technologies themselves. They are extremely complex, but the tools that let you use them are not that complicated, right?
DS: That is right. Many times you need a very good analyst. The barrier to entry is no longer that you have to be really good at mathematics, statistics and so on. You can be a really good power user without that kind of knowledge. That is why business analysts are a perfect fit to start working with AI.
DS: Jumping to the second rule, it is to try to find AI champions and to showcase them in our environment.
Another rule, especially for managers, is to manage the expectations of the sponsors: to show them that this is not a very straightforward path. If we want to have an AI solution, we need to take a risk.
AK: These projects are kind of R&D, in the sense that you are not sure whether something will work, because those technologies are so fresh. We do not really know. We can think something will work, but there may be surprises.
DS: The fourth one is learning. We need to put a lot of effort into learning ourselves and teaching our teams. I think this is a very important part, and it is something we really need in our organizations right now: to learn.
AK: To learn, to share this knowledge and to kind of observe what is going on. With the technology changing so rapidly right now, what you know today may be obsolete tomorrow.
DS: We need to be aware and we need to have this knowledge.
AK: We need to make an informed decision.
DS: Another thing is that we really need to have clean data before we start an AI project. If you have a mess in your data, unstructured data, it is much more difficult to train the model properly and to get proper answers later.
AK: Gen AI is better at dealing with unclean data than, say, classical machine learning, but it is still a big problem and we need to be able to manage it. We are never going to have 100% clean data, but we need to strive for it.
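For illustration, a pre-project data check along the lines discussed here could look like the minimal sketch below. It assumes a tabular extract loaded with pandas; the invoices.csv file and the amount and issue_date columns are hypothetical, not from any real project.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> dict:
    """Summarize basic data-quality issues before any model work starts."""
    report = {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        # Share of missing values per column, worst offenders first.
        "missing_ratio": df.isna().mean().sort_values(ascending=False).round(3).to_dict(),
    }
    # Hypothetical domain checks; adjust to your own schema.
    if "amount" in df.columns:
        report["negative_amounts"] = int((df["amount"] < 0).sum())
    if "issue_date" in df.columns:
        parsed = pd.to_datetime(df["issue_date"], errors="coerce")
        report["unparseable_dates"] = int(parsed.isna().sum())
    return report

if __name__ == "__main__":
    df = pd.read_csv("invoices.csv")  # hypothetical input file
    for key, value in data_quality_report(df).items():
        print(key, value)
```

A report like this does not clean the data by itself, but it gives the team and the sponsor a concrete, repeatable baseline to discuss before anything is trained.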
DS: It is really important to also maintain the models properly. This is the next point: maintenance. An AI project does not finish with one simple product at the end; it will always need maintaining.
AK: Especially with Gen AI technologies, testing is quite hard because you get non-deterministic results. You need a way of testing the solution thoroughly to make sure it does not get worse over time.
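As a rough illustration of what testing non-deterministic output can look like, here is a minimal golden-set regression check. The call_model function, the acceptance criteria and the 0.9 threshold are all assumptions made up for this example, not part of any specific product.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class GoldenCase:
    prompt: str
    must_contain: List[str]  # substrings an acceptable answer should include

def regression_pass_rate(call_model: Callable[[str], str],
                         cases: List[GoldenCase],
                         runs_per_case: int = 3) -> float:
    """Run each golden prompt several times and measure how often the
    answer satisfies its acceptance criteria."""
    passed = total = 0
    for case in cases:
        for _ in range(runs_per_case):  # repeat because output is non-deterministic
            answer = call_model(case.prompt).lower()
            total += 1
            if all(token.lower() in answer for token in case.must_contain):
                passed += 1
    return passed / total

# Usage sketch: fail the release if quality drops below an agreed baseline.
# cases = [GoldenCase("Summarize invoice 123", ["invoice", "123"])]
# assert regression_pass_rate(my_model, cases) >= 0.9
```

Running the same golden set before every release, and keeping the pass rate visible, is one simple way to notice when a model or prompt change quietly makes things worse.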
DS: Definitely, testing is something that differentiates AI projects from the other projects we have on the market, like classic IT projects, RPA projects and others. And in all of those AI projects, we need to be sure that what we do is ethical and that we are transparent in our communication within the organization. We need to communicate what the tool will bring us and how we should work with it, and we should communicate it very clearly so that people feel safe.
DS: Coming back to those points, I was also thinking about measurements and KPIs. We always need to be able to prove whether something is going well or badly, and we also need to know how to measure the impact of AI in our environment.
AK: You have to think about the metrics that will allow you to do that.
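For illustration only, metrics like that could be tracked with something as small as the sketch below. The fields, automation rate, override rate and minutes saved, are invented examples of KPIs, not a prescribed set.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CaseOutcome:
    handled_automatically: bool  # did the AI solution complete the case on its own?
    human_override: bool         # did a person have to correct the result?
    minutes_saved: float         # estimate versus the manual baseline

def kpi_summary(outcomes: List[CaseOutcome]) -> dict:
    """Aggregate a few illustrative KPIs for an AI-assisted process."""
    n = len(outcomes)
    return {
        "automation_rate": sum(o.handled_automatically for o in outcomes) / n,
        "override_rate": sum(o.human_override for o in outcomes) / n,
        "total_minutes_saved": sum(o.minutes_saved for o in outcomes),
    }

if __name__ == "__main__":
    sample = [
        CaseOutcome(True, False, 12.0),
        CaseOutcome(True, True, 8.0),
        CaseOutcome(False, False, 0.0),
    ]
    print(kpi_summary(sample))  # roughly 67% automated, 33% overridden, 20 minutes saved
```

Whatever the exact metrics, the point is the same as with the budget: agree on them with the sponsor before the project starts, so that the answer to "is this going well?" is not a matter of opinion.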
DS: I think that communication, thinking about changing the culture and being open about it in the organization are also very important. And so is managing expectations, because sometimes those are very, very high.
AK: The expectations can be extremely high. People go to conferences and listen to consultants basically selling them unrealistic claims. You need to keep it grounded and not promise things you will not be able to deliver, because that could be the end of your AI program and AI strategy.
DS: Exactly. The last point is to be open to change. The technology is changing very, very fast. After half a year you might throw your solution in the bin, because the technology has already been developed by some other company and you can simply buy it on the market; you do not have to build it on your own.
Conclusion
AK: Those were Dagmara’s 10 golden rules for managing AI projects. I think they would make a great article for the knowledge section of our web page.
DS: We can follow it.
AK: Dagmara Sysuła, thank you so much for joining this episode of the podcast and for sharing your thoughts and your experiences on AI projects.
DS: I am a bit worried that in a few months we may have to put all those rules in the bin as well, because something will change. But we are open to that and looking forward to a good future.
AK: And that is our delivery for this episode. Hopefully, it arrived on time, within budget, and did not hallucinate any strange new features along the way. Thank you for downloading this bucket of information into your brains. A huge thank you to our guest Dagmara Sysuła, the battle-hardened delivery manager who navigated us through the chaos without a single Jira ticket getting lost. And of course to Anna Cubal, our own master of delivery, who produces this show from the legendary Wodzu Beats Studio, our very own development environment. Until next time, may your data be clean and your stakeholders reasonable. Mata ne.