In Practice: AI in the Enterprise | Day 2: When the Board Asks “Who Owns This AI?”, Your Answer Reveals Everything

Your org chart is a lie about how AI actually gets made. Not intentionally. But fundamentally.

Pull out the structure your company publishes. Find the AI or data science team. Now trace the lines. Probably reports to VP of Engineering. Or CTO. Or maybe there’s a Chief Data Officer. Clean lines. Clear hierarchy.

Now describe what actually happens when someone builds an AI system in your organization.

The data science team trains the model. But they didn’t pick the business metric it optimizes for; someone in product did, or maybe operations. They didn’t decide whether to deploy it; that’s usually a product decision, sometimes influenced by finance. They’re not monitoring what it does in production; operations handles that, or risk management, or sometimes nobody specifically owns it. They certainly aren’t making the call if something goes wrong.

So when the board asks “who owns this AI?” and you point to the org chart, you’re not lying. You’re just describing something that has nothing to do with how the decision actually gets made.

This gap, between formal ownership on paper and actual accountability in practice, is where most AI governance breaks down.

The Difference Between Ownership and Accountability

Start with a definition. “Who owns it” in a traditional sense means: whose budget, whose performance evaluation, whose responsibility if it fails. On an org chart, that’s usually clear. But ownership in the org chart doesn’t mean accountability for outcomes.

Accountability means: you made the decision to deploy this, you understood the risks, and if something goes wrong, you face the consequences. You’re the person who can’t pass the problem to someone else.

In most organizations, AI systems have plenty of ownership but almost no accountability. The data science VP owns the team. But the product leader who defined what “good” meant owns the business decision. The ops leader who deployed it owns the infrastructure. The compliance team owns the audit trail. Finance owns the budget impact if it goes wrong.

Everyone owns a piece. Nobody owns the whole thing.

This creates a specific organizational pathology: distributed blame. Something goes wrong—a model makes a bad prediction, or shows bias, or violates a regulatory expectation. Now it’s “well, the data was the problem” (data engineering’s issue), or “the business case was wrong” (product’s issue), or “we didn’t monitor properly” (operations’ issue). Everyone points at a piece of the system they don’t own, and nobody has to fully own the failure.

Compare this to how a traditional business decision gets made. If a CFO approves a capital expenditure and it goes bad, they own it. They were accountable. They had authority to say no. They understood what could go wrong. That structure, clear authority, clear accountability, clear consequence, is what actually drives good decision-making.

AI hasn’t broken accountability as a concept. It’s just exposed that most organizations never gave anyone accountability for AI decisions in the first place. They distributed ownership across multiple teams and called it governance.

Why Org Charts Fail for AI

The problem starts with how data science evolved in most enterprises.

Early data science was an analytics function. It reported to someone, usually in engineering, or to a data leader under finance or analytics. Small teams. Answering specific questions. Then AI became strategic, and suddenly these teams were making decisions that could reshape customer experiences, cost millions of dollars, or create regulatory exposure. But the organizational position didn’t change. The team still reported to the same place. Still had the same scope on paper.

Except now they were making much bigger decisions, across multiple business units, with risks that weren’t on their radar.

Meanwhile, product, operations, and compliance weren’t designed to own AI decisions either. Product teams decide what features to build, not how to train models. Operations teams run infrastructure, not model performance. Compliance teams audit outcomes, not design systems before they’re deployed. So you end up with a situation where the people technically closest to the decision (data science teams) don’t have the authority or perspective to own it, and the people who have the authority (business unit leaders) aren’t positioned to understand the technical implications.

This is why “building an AI governance committee” doesn’t actually solve it. Committees are great for coordination. They’re terrible for accountability. Everyone on the committee has other priorities and other people they’re accountable to. Decisions get diffused. Accountability dissolves.

The org chart assumes that ownership flows upward: one person in the hierarchy is ultimately responsible. AI breaks that assumption by making decisions actually horizontal. You need product perspective, technical capability, risk understanding, and compliance knowledge simultaneously. No single person has all of that.

What Accountability Actually Requires

If you want clear accountability for AI decisions, you need three things on paper that match three things in practice.

First: A clear decision-maker. Not a committee. One person. This person needs to have the authority to say no to deployment. The authority to pull a system that’s not working. The authority to redirect resources. That person probably needs to be a business leader, the person whose P&L or customer outcomes are affected, not a technical person. Because the decision being made isn’t “is this technically possible,” it’s “should we take this risk given our business situation.”

Second: Clear information at decision time. That decision-maker needs to know what the model does, what can go wrong, what the business case is, what the regulatory exposure is, and what monitoring will tell us if it’s degrading. They need this before they approve. Not after. Not in a report six months later. At decision time. That means the technical teams, product teams, and risk teams have to feed information to that decision-maker in a structured way. Not “here’s what we found,” but “here’s what you need to decide.”

Third: Real authority over what happens after. This is where most organizations fail. The decision-maker approves deployment. Then operations runs it. Risk monitors it. Finance tracks costs. Product owns the feature. Nobody has the authority to actually change what happens based on what’s learned. You need that decision-maker, or someone they explicitly delegate to, to have the authority to reconfigure, halt, or redirect based on post-deployment information. Otherwise, the decision-making authority is theatrical.

The Signals You’re Getting It Wrong

Here are the patterns that show up when accountability is actually distributed rather than clear:

  • You have a data science team reporting to engineering and a separate AI ethics team reporting to compliance, and they don’t actually coordinate before deployment because they’re in different chains of command.
  • You have a governance committee that reviews AI projects, but the committee has no power to block deployment; it exists to document that review happened.
  • You’re using three different job titles for people doing similar roles (because the org structure doesn’t actually match the work).
  • When something goes wrong with an AI system, the first five conversations are about whose fault it was, not about how to prevent it next time.
  • You have a Chief Data Officer and a VP of Product making different decisions about the same system because they’re optimizing for different metrics.
  • Your risk and compliance teams are most engaged after deployment, not before it.

What Actually Works

The organizations I’ve seen successfully clarify this usually make three changes:

  • They explicitly assign decision authority to a single person, usually someone in the business (product, operations, or line of business) with real budget and outcome accountability. That person owns the deployment decision.
  • They structure information flow so that person gets input from technical, product, risk, and compliance teams before the decision, not after. This usually means regular design reviews with clear documentation of what each team validated.
  • They give that decision-maker or their delegate ongoing authority to act if something changes. No separate approval needed to reconfigure or halt. The authority flows from the decision-maker, not upward through hierarchy.

This doesn’t require changing your org chart. It just requires being explicit about who actually decides and making sure the org chart doesn’t contradict that. When the board asks “who owns this AI?” the answer should be one name. Not a team. Not a committee. One person who made the call and can defend it.

That clarity, about who decided and why, is what governance actually looks like. Everything else is just process.

In Practice: AI in the Enterprise | Day 1: The Moment Your AI Strategy Became a Governance Problem

I watched it happen in three different organizations within a year. Each one had done the hard work—hired the right talent, built capable systems, deployed models that worked. Then came the moment when someone in the board room asked a question that sounded simple: “Who approved this?”

The room went quiet. Not because they didn’t have an answer. They had too many answers, all contradicting each other.

In one case, the answer was the VP of Analytics. In another, Product. In a third, it was “well, the data science team built it, and operations deployed it, so…” That trailing off matters. It’s the sound of governance architecture breaking under the weight of something genuinely new.

This isn’t a technology problem. It’s an organizational design problem. And most enterprises haven’t yet realized their governance structures—the things they’ve spent decades perfecting for traditional software, for regulated processes, for operational risk—are fundamentally misaligned with how AI actually works in practice.

Where Traditional Structures Break

Traditional governance works because it assumes clear ownership boundaries. The application owner is responsible. The security team reviews. Compliance signs off. Audit checks the box. There’s a chain of custody.

AI breaks this. Not because AI is magic, but because it distributes responsibility across domains that don’t typically talk to each other. A model’s behavior depends on data quality (traditionally data engineering’s problem), training methodology (data science), business logic interpretation (product), deployment infrastructure (operations), and ongoing performance monitoring (analytics or sometimes risk management).

When something goes wrong—a model drifts, produces unfair outcomes, or makes a costly decision—the question “who owns this?” becomes genuinely difficult to answer. Was it the data scientists who built it? The engineering team that deployed it? The business stakeholders who set the success metrics? The person who defined what “fair” means?

I’ve heard CFOs push back on AI initiatives not because they doubt the technology works, but because they can’t see the accountability chain. That’s not caution. That’s competence. They’re asking the right question, just about the wrong structure.

What Governance Actually Needs to Answer

Real governance has to answer three things:

First: Who can approve the deployment decision? Not who built it—who actually says “yes, this goes to production.” The temptation is to assign this to the most senior technical person in the room. That’s backward. This decision is business risk, not technical risk. A model can be technically sound and still a bad business decision. The person who signs off on deployment needs to understand what it does, what can go wrong, and what the costs are. They need to be positioned in the organization so that cost falls on them if it materializes.

Second: Who monitors for the specific failures that matter? Traditional systems have ops teams watching for downtime. But AI systems that are running perfectly fine technically can still be producing systematically biased outputs, or slowly drifting away from the decision quality they had on day one. You need someone looking for those failures. The person has to understand what they’re looking for. And they have to have the authority to pull the cord if they find it.

Third: Who decides what the model is actually supposed to optimize for? This is the one that trips people up. Data scientists are trained to optimize for mathematical objectives—accuracy, AUC, F1 scores. But a business decision that’s technically accurate can still be wrong. A lending model might predict default probability accurately but produce disparate impact. A hiring model might predict job tenure accurately but systematically screen out qualified candidates from certain groups. The technical metrics don’t capture what actually matters.

Getting this wrong creates a specific kind of organizational failure: governance theater. You’ll put in a review process. You’ll create a checklist. You’ll assign AI governance committees. And then you’ll still have decision-making happening in the gaps—skipped reviews, reinterpreted policies, informal approvals. This happens not because people are cutting corners, but because the formal structure doesn’t actually address the real decision point.

The Three Gaps Most Organizations Face

In the organizations I’ve watched go through this, three patterns appear consistently.

The first is accountability without authority. Risk and compliance teams are asked to govern AI but don’t have the budget authority, the technical knowledge base, or the position in the decision flow to actually prevent bad deployments. They review after decisions are made. They’re auditors, not governors.

The second is speed pressure meeting governance design. You’ll see this as “we need to govern responsibly, but we can’t slow down.” So you design governance that theoretically makes sense but requires five review gates, each with different stakeholders, none of whom are empowered to make the final call. Then the organization learns to route around it. You end up with faster, less governed decision-making, not slower, more governed decision-making.

The third is confusing process with governance. You’ll see organizations build elaborate approval workflows for AI—more elaborate than they have for traditional software—thinking that more process equals better governance. It doesn’t. Better governance is clearer accountability, better information to the person making the decision, and real authority to say no.

What Actually Works

The organizations that get this right share something: they’ve explicitly designed governance around how decisions actually get made, not around what they wish would happen.

They assign clear approval authority—often to a business owner, not a technical person—with explicit responsibility for the downstream impact if the decision goes wrong. They build monitoring into the system itself, not as an afterthought, with someone empowered to act on what’s learned. They translate business requirements into the actual constraints that matter for model behavior, and make those explicit at training time, not as a risk to be managed after deployment.

None of this requires new technology. It requires thinking through organizational design with the same rigor you’d apply to any other high-risk process.

The moment your AI strategy became a governance problem wasn’t when you deployed your first model. It was when you assumed your existing governance structure would work for something fundamentally distributed across your organization. Most enterprises haven’t yet recognized this moment. When you see the room go quiet at the question “who approved this?”—that’s when you know you’re there.

The good news: this is solvable. It’s just not a technical problem.

Is Ethereum Waffle throwing installation tantrums with Ubuntu?

Recently, I have been spending lots of time with Blockchain, Ethereum, and Smart Contracts. Ubuntu continues to baffle me: it’s AWESOME, secure, and very dependable; however, deploying and configuring things sometimes takes an army to make happen, especially with Linux’s useless error messages.

In this post, I am sharing a troubleshooting journey that I went through while installing Ethereum Waffle on an Ubuntu development virtual machine. It all started with the following error message:

undefined ls-remote -h -t https://github.com/ethereumjs/ethereumjs-abi.git

Check out the screenshot below; it showed up as soon as I ran

npm install ethereum-waffle

Ethereum Waffle Installation Error

Hmm, where to start? Looking thoroughly through the error output, I found this interesting line.

This is related to npm not being able to find a file.

From experience, usually, the first thing to start with is making sure that I am running the latest stable build of Node.js. A quick version check with

node -v

shows that I am not running the latest version. Instead of v16, I am running v10. Let’s fix that. I started first with

sudo apt update
sudo apt upgrade

Update & Upgrade Related Packages

Once done with that, let’s make sure we get the latest version of Node.js. A quick Google search landed me on a NodeSource article that covers this. Fast forward: make sure you have curl installed and run this

curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash

Use curl to get the latest version of Node.js

As we have the latest binaries now available for installation, go ahead and run

sudo apt install -y nodejs

Installing the latest Node.js binaries

Doing a quick version check, I see we are running the latest build, v16. Now let’s try installing Ethereum Waffle again, crossing my fingers that this time it will work. Otherwise, this article will grow longer 😉

Successful installation of Ethereum Waffle

Yes, the installation completed successfully! So, the moral of this: if Ethereum Waffle is throwing a tantrum during installation, make sure you are running v16 or higher of Node.js.
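If you want to catch this earlier next time, here is a minimal pre-flight sketch. It assumes the v16 requirement above; the `node_major` helper is my own illustration, and the hard-coded version string stands in for the output of `node -v`:

```shell
# Illustrative pre-flight check: is the local Node.js new enough for ethereum-waffle?
# NODE_VERSION is hard-coded for this sketch; in practice use: NODE_VERSION="$(node -v)"
NODE_VERSION="v10.19.0"

# Strip the leading "v" and everything after the first dot to get the major version
node_major() {
  echo "$1" | sed 's/^v\([0-9]*\)\..*/\1/'
}

if [ "$(node_major "$NODE_VERSION")" -lt 16 ]; then
  echo "Node.js $NODE_VERSION is too old for ethereum-waffle; upgrade to v16 or newer"
fi
```

Run it before `npm install ethereum-waffle` and you will know up front whether an upgrade is needed.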

Resolving passive Dual SIM behavior with Samsung’s new S21 Plus phone

As excited as I was to get my hands on the new S21+ phone, I was quickly disappointed. When the eSIM profile was activated, the other provider’s physical SIM was turned off by Android.

I was not able to keep both the eSIM profile and the physical SIM functioning at the same time! Check out the screenshots below.

What is going on? I asked myself. I was in disbelief. After all, this is a flagship phone. You would expect this behavior from a lower-tier device.

Being curious by nature and a problem solver by trade, I went through the phone’s online spec description inside-out, and it clearly stated support for active Dual SIM features. Why was it acting as a passive Dual SIM phone? Looking through the forums online did not help either.

I started to think that this was related to the Android software installed on my device.

So, how did I manage to restore active Dual SIM features?

Apparently, the solution was fairly simple: I had to make sure that the physical SIM was placed in slot 1, not in slot 2. Stupid design by Samsung? Why would I, as a user, need to bother with the fact that active Dual SIM behavior requires the physical SIM to be in slot 1, while with the SIM in slot 2 the phone only allows passive Dual SIM behavior? Dear Samsung, either block Dual SIM behavior completely when physical SIM slot 1 is empty, forcing the user to consider changing the SIM slot, or (even better) remove this weak design dependency and enable active Dual SIM regardless of which slot is being used.

Active Dual SIM behavior working after swapping physical SIM slot

I was excited when this worked 🙂 If anything, this incident should be an example of why user experience designers must work closely with non-tech-savvy users.

Oops! You can’t change the Screensaver due to a permissions-related error on Windows 10? I have your workaround here

Today, I am installing the new Microsoft Flight Simulator 🙂 Super excited, well, until I realized that it’s approx. 92GB in size. Nevertheless, I have played this game since the MS-DOS version, and I enjoyed playing the 1998, 2002, 2004, and X releases. FSX was awesome; I remember joining a virtual airline at VATME and signing up to fly simulated flights on real routes onboard the A320 and A330. I was serious about collecting those virtual flight hours to rise in rank, so I picked long flights all the time, and I would sometimes set the route with IFR and nap next to my machine to crunch the hours and get some sleep. My wife was always in disbelief, rejecting the idea that I could fly an Airbus A320 with a HOTAS controller. One of my friends was a real captain flying a commercial airliner; sometimes I would discuss my virtual flights with him, and he was kind enough to entertain my questions, while everyone else just looked at me funny, wishing they could yell: “You are not a real pilot”. Oh yes, the good old days 😀

Back to the mammoth installation, check the screenshot below. It’s these moments that make you wish for 5G to come sooner:

92GB to install the new Microsoft Flight Simulator

So what does this have to do with the Screensaver error? 😉 Well, while waiting for this installation to finish, I started thinking about the old days with FSX, and remembered that I had an awesome Screensaver back then for FS2004. So naturally, I started thinking about Screensavers, and was curious to see what Screensavers Windows 10 has.

To my bad luck, as soon as I right-clicked the Desktop and picked Personalize:

Desktop Context Right-Click Menu (Yes, fancy name)

then chose “Screen saver settings” under “Lock screen”

Control Panel

aaaaand “bam bam”, a weird looking error message: “Windows cannot access the specified device, path, or file. You may not have the appropriate permissions to access this item.”.

Scary Screensaver Error Message

That was a first for me. I had Local Administrator permissions on this machine, so why would this pop up? So naturally, I figured I had 2 options: either repair my Windows installation, or simply find a way to launch the Control Panel dialog programmatically with elevated permissions. Obviously, I picked the second option. I am too lazy to think about repairing this for good. To be honest, I even set up a shortcut to the solution I am about to share with you on my OneDrive, to persist across installations & refreshes. Yes, that lazy 😀

After a bit of digging, I found this URL which explains how I can programmatically launch the Control Panel Screensaver dialog. All I had to do was open the Run dialog (WIN + R) and type the below.

Note that the Windows Dev Center article assumes you have a Screensaver file that you want to install. You can just put any name there, I used “dummy.scr“:

rundll32.exe desk.cpl, InstallScreenSaver dummy.scr

Windows 10 Run Dialog

Drum roll 😉 and the dialog is up and running.

Windows 10 Screensaver Dialog

Bummer, though: I need to run this command every time I want to change the Screensaver. It’s not like I change it often, so it does not really suck. I believe it’s been years since I had a Screensaver turned on. I created a shortcut with this command line and stored it on my OneDrive for future use. Maybe you can do the same, or just decide to repair your Windows 10 installation. Not sure the Screensaver is worth all that hassle. Up to you 😉

Troubleshoot your Outlook: sending weird random content of Asian characters when you respond to meeting invites?!

So, sharing one wild story I went through in the past couple of days 😉 Whenever I responded to a meeting invite using Outlook on Windows 10, it misbehaved and replaced my message with random content using a mix of Asian characters. Check out the screenshot below. I was not able to reproduce this behavior using Outlook Web Access or Outlook for Mobile.

Outlook invite response

This all started when a colleague of mine pinged me over Teams and asked me if I spoke Chinese. I smiled and said: “I wish. Why would you think that?” He told me that my meeting invite response was not in English. So naturally, I thought he was messing with me 😛 and I checked the Sent folder in Outlook. To my surprise, for days my responses to meeting invites had indeed been replaced with that mix of Asian characters. What surprised me even more was that the content kept changing with every message.

At that moment, my head rushed to wild conclusions. Maybe someone hacked my machine and this is a message in their mother tongue? Is this a ransom message? Why is it that subtle, though? Why not contact me directly? I checked out both the Bing and Google translators. Google detected it as Chinese, Bing as Japanese, and neither was able to translate it. Check out the screenshots below. I then checked whether any of my external contacts/customers had been exposed and reported this to IT.

Bing Translator
Google Translator

After a couple of remote desktop sessions with IT, we ruled out an attack and came to the conclusion that this was an encoding problem caused by a beta Windows 10 feature.

The resolution was simple: uncheck the box called “Beta: Use Unicode UTF-8 for worldwide language support.”. You can find that under the Administrative tab in the Region settings of the Control Panel. Check out the screenshot below. As soon as you apply this change, Windows 10 will ask for your permission to restart the machine.

Solution: uncheck beta Unicode checkbox

Well, that’s it 🙂 Outlook will go back to its normal behavior. When I initially faced this problem, I did not find any resources online, so I am sharing this with you, as I am pretty sure others will run into this issue too.

Access and consume a SharePoint Online Custom List from an ASP.NET Web API using CSOM and Bearer Access Tokens

In this post we will explore accessing a SharePoint Online Custom List from an ASP.NET Web API using CSOM. You can find the source code on GitHub: https://github.com/codedebate/Samples-SPOAccessWebAPI.

While many services exist today to ease access and orchestration of data and process flows, like Power Platform and Azure Logic Apps, you will still run into scenarios where your own REST API is required. With a large number of companies embracing the citizen developer culture today, many power users resort to PowerApps for building single-purpose basic business applications and use SharePoint Online Custom Lists as the database. This is why I wrote this blog post 🙂

When dealing with on-premises SharePoint deployments, SharePoint farm solutions dominated the stage for a long time as the high-risk/high-reward answer to every problem, be it Event Receivers, Workflows, etc. Back then, you really had to know what you were doing; otherwise, you might crash the whole SharePoint farm. The catch, of course, was that your DLLs always sat on the SharePoint servers, in the .NET Framework Global Assembly Cache (GAC).

With SharePoint Online, this is not the case: you will need to use the SharePoint Client Object Model (CSOM), and you will need to include extra plumbing work for access authorization using bearer access tokens. If not done right, you will waste time chasing error codes across the web.

So let’s get to know the sample we are about to build. For starters, we have a Custom List called “GarageParkedCars”; like any pointless demo asset, it’s there to log something, in this case cars parked in our office garage. For each record we store: the plate number, driver, parking spot number, and the date. Check the list below. We will build an ASP.NET Web API that uses CSOM to consume the Custom List.

Sample Demo Custom List

Step 1: Know your NuGet packages

To do this correctly, and avoid wasting your time, you will need to install the following packages after creating your ASP.NET Web API project. Make sure you respect the sequential order below:

  1. Microsoft.IdentityModel
  2. Microsoft.IdentityModel.Extensions
  3. AppForSharePointOnlineWebToolkit

Yes the name AppForSharePointOnlineWebToolkit sounds weird. Trust me, you will need it to use CSOM effectively and consume access tokens to access content from SharePoint Online.

Now, you might ask, why the panic and warnings about following the order mentioned above? Skip it, and you will end up with the following fun error message: “Failed to add reference. The package ‘AppForSharePointOnlineWebToolkit’ tried to add a framework reference to ‘Microsoft.IdentityModel’ which was not found in the GAC. This is possibly a bug in the package. Please contact the package owners for assistance. Reference unavailable.” Check out the screenshot below for dramatic effect 😉 Searching the web for a resolution to this particular error will take you to the wrong places.

Fancy error message when you mess up the installation order

Step 2: Register your Web API in SharePoint Online

To do that, use your site collection URL and navigate to “/_layouts/15/AppRegNew.aspx”. Since I am not actually deploying this sample Web API and will be testing it from my machine locally, I used localhost. Make sure to use the correct domain, and watch out for sub-domains.

Registering your Web API at /_layouts/15/AppRegNew.aspx

Once done with the registration, make sure to note both Client Id and Client Secret. We will need them both to request the access token later on when accessing the SharePoint Online Custom List.

Once registration is done, write down the Client Id and Client Secret

Step 3: Authorize your new registered Web API in SharePoint Online

To do that, use your site collection URL and navigate to “/_layouts/15/AppInv.aspx”. To start, paste the Client Id into the App Id field and click on Lookup. Not sure why it’s called App Id instead of Client Id.

Authorizing your Web API at /_layouts/15/AppInv.aspx

Don’t be spooked by the name “App’s Permission Request XML”. This is a simple XML block that defines what type of authorization the app, in this case our Web API, will have. For the sake of this demo, I used the block below, which simply granted my Web API full control over the Site Collection. Don’t do this in a real production setup. Make sure to check the link here for examples of the right XML block for your scenario.

<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/content/sitecollection" Right="FullControl"/>
  <AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web" Right="FullControl"/>
</AppPermissionRequests>

Once you create the authorization, SharePoint will ask if you trust the Web API with the permissions you listed in the XML block; this time they are listed in simple English.

Trusting your Web API with the new permissions

Side Note

At any time you can remove the authorization by navigating to the Site Settings page and clicking on “Site collection app permissions”

Site Settings page

Once in, you will find a list of all registered applications, and in our sample the Web API. To revoke access, simply proceed and delete the registration.

Revoking app permissions from Site Settings

Step 4: Add the AppSettings keys to the Web API configuration file

So to recap our progress until now:

  • We created the ASP.NET Web API project and installed the needed NuGet packages
  • We registered the Web API in SharePoint Online and granted it access authorization

Now, we need to take advantage of the web toolkit and make sure it can request an access token for our CSOM requests to SharePoint Online. To do that, we need to add both the ClientId and ClientSecret keys, using the values we obtained earlier when we registered the Web API in SharePoint Online. Make sure you spell the keys correctly; otherwise, the web toolkit will not be able to find them and request the access token.

ClientId and ClientSecret in Web.config
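In text form, the appSettings entries look like this (the GUID and secret below are placeholders, not real values — use the ones from your own registration):

<appSettings>
  <add key="ClientId" value="00000000-0000-0000-0000-000000000000" />
  <add key="ClientSecret" value="your-client-secret-here" />
</appSettings>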

Step 5: Use CSOM with CAML to query the SharePoint Online Custom List

Now we are ready to see some action with SharePoint Online and our Web API. Let’s start by creating a model to simplify access to and storage of the Custom List items.

public class GarageParkedCar
{
    public string PlateNumber { get; set; }
    public string Driver { get; set; }
    public string ParkingSpot { get; set; }
    public string RecordCreated { get; set; }

    public GarageParkedCar(string plateNumber, string driver, string parkingSpot, string recordCreated)
    {
        PlateNumber = plateNumber;
        Driver = driver;
        ParkingSpot = parkingSpot;
        RecordCreated = recordCreated;
    }

    public GarageParkedCar()
    {
    }
}

Next, make sure to update your Web.config file with two additional keys in AppSettings: one for the WebUri and the other for the ListTitle. The WebUri is the URL of the SharePoint Online site that hosts the Custom List, and the ListTitle is the name of the Custom List.

WebUri and ListTitle in Web.config
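In text form, the two keys might look like this (the URL and list name below are illustrative only — substitute your own site and list):

<appSettings>
  <add key="WebUri" value="https://contoso.sharepoint.com/sites/garage" />
  <add key="ListTitle" value="GarageParkedCars" />
</appSettings>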

Finally, our GET action. I have included comments inline with the code to explain it. A couple of thoughts here:

  1. We create a collection to store the list items coming back. Some folks like to use DataTables; I prefer a strongly typed approach instead 🙂
  2. We use Collaborative Application Markup Language (CAML) to build queries that filter the list items on SharePoint Online, instead of getting all list items and filtering with LINQ. This comes in very handy, and performs well, when your lists have lots of items and/or you serve a large number of requests.

// GET: api/GarageParkedCar
public IEnumerable<GarageParkedCar> Get()
{
    // The collection we will use to store and return 
    // all the records coming back from the SharePoint Online Custom List
    var response = new List<GarageParkedCar>();

    // Get the URL to the SharePoint Online site
    var webUri = 
        new Uri(
            ConfigurationManager.AppSettings["WebUri"]);

    // Get the access token. The web toolkit will do all the work
    // for you. Remember, you will need ClientId and ClientSecret in the Web.config
    var realm = TokenHelper.GetRealmFromTargetUrl(webUri);
    var accessToken = TokenHelper.GetAppOnlyAccessToken(
        TokenHelper.SharePointPrincipal,
        webUri.Authority, realm).AccessToken;

    // Initialize the SharePoint Online access context 
    var context = TokenHelper.GetClientContextWithAccessToken(
        webUri.ToString(), accessToken);

    // Create an object to access the SharePoint Online Custom List
    var garageParkedCarsList = 
        context.Web.Lists.GetByTitle(
            ConfigurationManager.AppSettings["ListTitle"]);

    // Create a new query to filter the list items. As we are looking to 
    // retrieve all items, you can leave the query blank
    var query = new CamlQuery();

    // Create an object to store the list items coming back from 
    // SharePoint Online and execute the query request
    var garageParkedCarsCollection = garageParkedCarsList.GetItems(query);
    context.Load(garageParkedCarsCollection);
    context.ExecuteQuery();

    // Loop all list items coming back, and create a new object from our
    // model for each list item, so we can access and process them
    foreach (var item in garageParkedCarsCollection)
    {
        response.Add(
            new GarageParkedCar(
                item["Title"].ToString(), 
                item["Driver"].ToString(), 
                item["ParkingSpot"].ToString(), 
                item["Created"].ToString()));
    }

    // Return the collection back as the response 
    return response;
}
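If you later want to filter server-side instead of leaving the query blank, you can set the ViewXml of the CamlQuery. A sketch, assuming the list has a ParkingSpot text column as in our model (the spot value and row limit are made up for illustration):

var query = new CamlQuery
{
    // Only return items parked in a specific spot, capped at 100 rows
    ViewXml =
        @"<View>
            <Query>
              <Where>
                <Eq>
                  <FieldRef Name='ParkingSpot' />
                  <Value Type='Text'>A-12</Value>
                </Eq>
              </Where>
            </Query>
            <RowLimit>100</RowLimit>
          </View>"
};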

So, let’s go ahead and run our Web API. The simplest way: hit F5 and navigate to “/api/GarageParkedCar”. This will trigger the GET action and query SharePoint Online for the list of parked car records in the garage. For dramatic effect, the screenshot below uses the XML output instead of JSON 😉

What’s Next?

You now have a simple way of accessing SharePoint Online resources using CSOM. The web toolkit takes care of authentication and authorization, and simply looks up the Client Id and Client Secret from your Web.config file. A word of advice to close this post: always try exploring Azure Logic Apps, Functions, Flow, and other no-code/low-code tools before considering writing your own REST API. It’s awesome to have your own code running, I get it. Yet from a maintenance and evolution perspective, double-check whether it is really needed.

Digital Transformation in Sports session at Data Science & Technology Club at the University of St.Gallen

Yesterday, we had the pleasure of meeting post-graduate students from the University of St.Gallen. We spoke about AI-infused #digitaltransformation in the #sports industry with a focus on fan engagement. The discussion spanned various topics about what Sports Clubs and Federations are doing to leverage AI-infused technology on #Azure and #PowerBI to gain a better understanding of their fans (millennials & Gen-Z), personalize their content & experience, and target them with specific digital engagement activations. I’d like to thank the students from the University of St. Gallen for their active participation during the event, and especially Jana Plananska from the Data Science & Technology Club at the University of St.Gallen for hosting and coordinating the event.

#artificialintelligence #switzerland #digitalengagement

AI in Procurement & Sourcing

Procurement and sourcing functions play a big role in our organizations today. Computer vision, knowledge mining and advanced analytics can help to identify risk positions across the supply chain, review and approve purchase orders, analyze spending and manage sourcing events. By applying these technologies, companies benefit from fast, efficient and effective processes to manage their supply chain.

Bianca and I are back with the seventh episode of One Minute AI. This time we will talk about infusing AI in Procurement & Sourcing. Don’t forget to check out the PointDrive here: https://lnkd.in/dKa5yD7

Episode 7: Procurement & Sourcing

Detect and Respond to Digital Crimes

Fraud in banking has been a growing phenomenon for quite some time. In a world of digitalization, the ways of committing fraud have been digitized as well. Speed is key to detecting it, so traditional detection methods need to be infused with AI, which can help detect fraud in seconds and reduce the enormous number of false positives.

Bianca and I are back with the sixth episode of One Minute AI. This time we will talk about detection & prevention of digital crime in the financial services industry using AI. Don’t forget to check out the PointDrive here: https://lnkd.in/gue936d

Episode 6: Detect and Respond to Digital Crimes