Kinda want to send this to my company lol
According to the M365 Copilot monitoring dashboard made available in the trial, an average of 72 M365 Copilot actions were taken per user.
“Based on there being 63 working days during the pilot, this is an average of 1.14 M365 Copilot actions taken per user per day,” the study says. Word, Teams, and Outlook were the most used, and Loop and OneNote usage rates were described as “very low,” less than 1 percent and 3 percent per day, respectively.
Yeah that probably won’t have the intended effect…this basically just shows that AI assistants provide no benefit when they’re not used and nothing else.
People probably tried it, found out that it’s crap and stopped using it.
It's hardly possible to actually test it properly in relation to your work and changes in productivity with a single query per day.
I'm not a programmer, so it's got nothing to offer me. Mostly my job is to write documentation for proprietary software and hardware, stuff the AI knows nothing about. Not everyone in the world can make use of AI, and it doesn't require a PhD and 30 days of constant usage to work that out.
I’m not saying AI specifically is useful, just that people in general tend to resist change in their work methods regardless of what they are.
I also work with a lot of proprietary knowledge, chemical and infrastructure in my case, and AI can still be useful when used properly. We use a local model, have given it all our internal docs and specs, and limited its answers to knowledge from those, so we can search thousands of documents much faster, and it links to the sources for its answers.
Doesn’t do my job for me, but it sure as shit makes it easier to have a proper internal search engine that can access information inside documents and not just the titles.
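For anyone curious, here's a rough sketch of that kind of setup in Python. To be clear, this isn't our actual system: the docs folder, the local OpenAI-compatible endpoint (Ollama-style), and the model name are all placeholders, and a real deployment would use proper embeddings rather than this crude keyword scoring.

```python
"""Toy retrieval-augmented search over internal docs (illustrative sketch only)."""
import math
import re
from collections import Counter
from pathlib import Path

import requests  # talks to a local OpenAI-compatible server (e.g. Ollama, llama.cpp)

LOCAL_LLM_URL = "http://localhost:11434/v1/chat/completions"  # placeholder endpoint
DOCS_DIR = Path("internal_docs")                              # placeholder folder


def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())


def load_chunks(chunk_size: int = 1500) -> list[tuple[str, str]]:
    """Split every .txt doc into fixed-size chunks, keeping the source path."""
    chunks = []
    for path in DOCS_DIR.rglob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for i in range(0, len(text), chunk_size):
            chunks.append((str(path), text[i:i + chunk_size]))
    return chunks


def top_chunks(question: str, chunks: list[tuple[str, str]], k: int = 3):
    """Crude keyword-overlap ranking; a real setup would use embeddings."""
    q = Counter(tokenize(question))

    def score(item):
        c = Counter(tokenize(item[1]))
        return sum(q[w] * math.log(1 + c[w]) for w in q)

    return sorted(chunks, key=score, reverse=True)[:k]


def answer(question: str) -> str:
    """Ask the local model to answer only from the retrieved chunks, citing paths."""
    hits = top_chunks(question, load_chunks())
    context = "\n\n".join(f"[{path}]\n{text}" for path, text in hits)
    resp = requests.post(LOCAL_LLM_URL, json={
        "model": "local-model",  # placeholder model name
        "messages": [
            {"role": "system",
             "content": "Answer only from the provided excerpts and cite the file paths."},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    }, timeout=120)
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(answer("What does spec X require for pipe insulation?"))
```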
Then maybe it’s not useful for you. That doesn’t mean AI isn’t useful for a number of other roles.
I'm a software developer and find its code generation to be awful, but I also find that it's great at looking up technical information. Maybe I'm looking for a library to accomplish a task, and I want to compare features. Or maybe I'm having trouble finding usage examples for a relatively niche library. Those are tasks the AI is great at, because it can look at tons of blog posts, Stack Overflow questions, etc., and generate me something reasonable that I can verify against official docs.
If my workflow was mostly email and internal documentation, yeah, AI wouldn't be that useful. If my workflow relies on existing documentation that's perhaps a little hard to find or a bit poor, then AI is great. Find the right use case and it can save time.
Then maybe it’s not useful for you. That doesn’t mean AI isn’t useful for a number of other roles.
Case in point, as per the article, AI is pretty useless for regular office work
“Regular office work” is a pretty broad category. Yeah, it’s probably not useful in retrieving records for someone or processing forms, but it should be useful for anything that requires research.
We have it on our system at work. When we asked what management expected it to be used for they didn’t have an answer.
We have a shell script that ingests a list of user IDs, resets their Active Directory passwords, locks the accounts, then sends them an email telling them to contact the support desk to unlock the account. It's a cron job that runs every Monday morning.
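(For reference, here's roughly what that kind of job looks like, sketched in Python rather than the actual shell script; samba-tool as the AD interface, the mail relay, addresses, and the input file are all placeholder assumptions.)

```python
# Rough Python sketch of the weekly job described above (the real thing is a shell
# script). samba-tool as the AD interface, the SMTP relay, the addresses, and the
# input file are all placeholder assumptions.
import secrets
import smtplib
import subprocess
from email.message import EmailMessage

SMTP_HOST = "smtp.internal.example"   # placeholder mail relay
FROM_ADDR = "it-support@example.org"  # placeholder sender
MAIL_DOMAIN = "example.org"           # placeholder user mail domain


def reset_and_lock(user_id: str) -> None:
    """Set a random password, then disable the account."""
    new_password = secrets.token_urlsafe(16)
    subprocess.run(["samba-tool", "user", "setpassword", user_id,
                    f"--newpassword={new_password}"], check=True)
    subprocess.run(["samba-tool", "user", "disable", user_id], check=True)


def notify(user_id: str) -> None:
    """Tell the user to contact the support desk to get unlocked."""
    msg = EmailMessage()
    msg["Subject"] = "Your account has been locked"
    msg["From"] = FROM_ADDR
    msg["To"] = f"{user_id}@{MAIL_DOMAIN}"
    msg.set_content("Please contact the support desk to unlock your account.")
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)


if __name__ == "__main__":
    # Run from cron every Monday morning; one user ID per line in the input file.
    with open("user_ids.txt") as fh:
        for user_id in (line.strip() for line in fh if line.strip()):
            reset_and_lock(user_id)
            notify(user_id)
```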
Why do we need an AI when we can just use that? A script that can be easily read, understood, and upgraded, with no concerns about it going off-piste and doing something random and unpredictable.
So yeah, they don’t use it, because it won’t work.
Well yeah, AI shouldn’t replace existing, working solutions, it should be used in the research phase for new solutions as a companion to existing tools.
this basically just shows that AI assistants provide no benefit when they're not used and nothing else.
So you think they may be useful but people just like to work harder? Or perhaps they tried, saw no benefit at all, and moved on?
Having been part of multiple projects introducing new software tools (not AI) to departments before, people are usually just stubborn and don't want to change their ways, even if it enables a smoother workflow with minimal training/practice. So yeah, people are so set in their ways that it is often hard to convince them something new will actually make their job easier.
The devil is in the details… what you describe screams to me what I call the “new boss syndrome”. New boss comes in and they feel the need to pee on everyone to mark their territory so they MUST bring in some genius change.
99% of the time, they are bringing in some forced change for the sake of change, or something that worked at their previous place, without taking the context into consideration.
I do not know anyone who prefers to work harder… either the changes proposed make no sense (or it’s too complex for people to understand the benefit) or the change is superfluous. That is usually where resistance to change comes from.
In all your software deployments did you blame the users for not getting it or did you redesign the software because it sucked (according to your users)?
I was one of the users; these are my observations of my colleagues' reactions, and sometimes also my own.
That’s not what I’m asking. You designed or built something for some users. They didn’t like it, or didn’t use it as you expected. Was your response to change the software or blame the users for not using it correctly?
That depends on the issue. Sometimes it’s a lack of training, sometimes it’s obtuse software. That’s a call the product owner needs to make.
For something like AI, it does take some practice to learn what it's good at and what it's not good at. So there's always going to be some amount of training needed before user complaints should be taken at face value. That's true for most tools; I wouldn't expect someone to jump into my workflow and be productive, because many of the tools I use require a fair amount of learning to use properly. That doesn't mean the tools are bad, it just means they're complex.
I've occasionally been part of training hourly workers on software new to them. Having really, really detailed work instructions and walking through all the steps with them the first time has helped me win over people who were initially really opposed to the products.
My experience with salaried workers has been that they are more likely to try new software on their own, but if they don't have much flexible time they usually choose to keep the established, less efficient routine over investing the one-time learning and setup effort to start a new, more efficient one. Myself included: I have for many years been aware of software my employer provides that would reduce the time spent on regular tasks, but I know the learning curve and setup is in the dozens of hours, and I haven't carved out time to do that.
So to answer the question, neither. The problem may be neither the software nor the users, but something else about the work environment.
Worth noting the average includes the people who did use it a lot too.
So you can conclude people basically did not use it at all.
I love that the only AI goal the oligarchy can focus on is making sure we can all use it to work more.
If you can be in three meetings at once with AI then every single one of those meetings could have been an email
Or a group chat
Yeah, no shit. But they nearly doubled the price. I canceled my membership, but I doubt enough did to actually matter.
I was fine paying $60 a year for Office. I was never gonna use the AI stuff. When they said it was $100, I bailed. So now they don’t get the $60. But enough people will go on paying that they will actually make more money on Office in the next year, not less.
Not enough people are willing to vote with their wallets or even their feet to effect any meaningful change. At least not when it comes to their tech toys.
Not enough people are willing to vote with their wallets
That and most governments are wrapped up in Windows, and therefore kinda just captive to the insane pricing. I get everything I need out of LibreOffice, personally.
The sole reason I still pay the Microsoft tax is Excel. Other office suite components are generally good enough to fill in for their Microsoft counterparts. But, spreadsheet programs are one area where open source competitors need to get their shit together.
Most of them can do the basics but Excel is still in a class by itself for power users and advanced functionality. That’s a real bummer because I would love to stop paying the Microsoft tax.
It's the VBA. It's proprietary and not for sale anymore, and there's not a good free replacement. I've been writing a reporting system that needs scripting and have had to use JavaScript and heavily document things for end users to even understand what is happening.
I don’t see where a government would need a chatbot. Anyways, chances are that half the staff was already using some form of LLM before this trial.
Why wouldn’t they want one? If it’s a tool their employees want, they should provide it.
The point is that this is all happening in a cloud. One that is probably located in the US. Not a good thing for a non-US government to send potentially confidential or even secret data to.
It doesn’t have to, you can run LLMs locally. We do at my org, and we only have a few dozen people using it, and it’s running on relatively modest hardware (Mac Mini for smaller models, Mac Studio for larger models).
Yeah, shitty toy ones. This here is about productivity, not about a hobby. And not even real state-of-the-art models were able to actually give a productivity advantage.
Our self-hosted ones are quite good and get the job done. We use them a lot for research, and it seems to do a better job than most search engines. We also link it to internal docs and it works pretty well for that too.
If you run a smaller model at home because you have limited RAM, yeah, you'll have less effective models. We can't run the top models on our hardware, but we can run much larger models than most hobbyists. We've compared against the larger commercial models, and they work well, if a little slowly.
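For what it's worth, the tooling is the same either way: most local servers (Ollama, llama.cpp, LM Studio, etc.) expose an OpenAI-compatible API, so the same client code works whether the model runs in someone's cloud or on a Mac in a closet. A minimal sketch, assuming the openai Python package and a local server; the port, key, and model name are placeholders:

```python
# The same client code can hit a hosted model or a self-hosted one, because most
# local servers expose an OpenAI-compatible API. URL, key, and model name below
# are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # point at the local server instead of the cloud
    api_key="not-needed-locally",          # local servers typically ignore the key
)

resp = client.chat.completions.create(
    model="some-large-local-model",  # whatever the local box happens to be serving
    messages=[{"role": "user", "content": "Summarize this document for me."}],
)
print(resp.choices[0].message.content)
```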
Yeah… You do it, but do you think the UK government does?
My point is they could if they were concerned about data leaks.
Ugh, thought this could’ve referred to a Trial as in “All rise for the judge”, not Trial as in “Your free trial has expired”.
We're way overdue to put AIs on formal trials.
Pretty sure its main function is to back up your data to cloud fully accessible by microsloth
From reading the study, it seems like the workers didn’t even use it. Less than 2 queries per day? A third of participants used it once per week?
This is a study of resistance to change or of malicious compliance. Or maybe it’s a study of how people react when you’re obviously trying to take their jobs.
I don't think it's people being resistant to change; I think it's people understanding the technology isn't useful. The tagline explains it best.
AI tech shows promise writing emails or summarizing meetings. Don’t bother with anything more complex
It's a gimmick, not a fully fleshed out productivity tool, so of course no one uses it. That's like complaining that no one uses MS Paint for the production of high-quality graphics.
The figures are the averages for the full trial period.
So it's possible they were making more queries at the start of the trial, but then mostly stopped once they found using Copilot was more a hindrance than a help.
I have a Copilot license at work. We also have an in-house "ChatGPT clone" - basically a private deployment of that model so that (hopefully) no input data gets used to train the models.
There are some use cases that are neat. E.g. we're a multilingual team, so having it transcribe, translate, and summarize a meeting makes it easier to finalize and check the minutes. Coming back from a vacation and just asking it to summarize everything you missed for a specific area of your work (to get on track before checking everything chronologically) can be nice, too.
We also fine-tuned a model to assist us in writing and explaining code in a domain-specific language we use for a tool; it has many strange quirks and poor support from off-the-shelf LLMs.
But all of these cases have one thing in common: They do not replace the actual work and are things that will be checked anyways (even the code one, as we know there are still many flaws, but it’s usually great at explaining the code now - not so at writing it). It’s just a convenient method to check your own work - and LLM hallucinations will usually be caught anyway.
Okay, but why did they use a guy beatboxing to illustrate their statement?
“speeding up some tasks yet making others slower due to lower quality outputs”
So use it for the tasks that were made more efficient, and stop using it for the ones that slowed down or were low quality.
I mean… it’s software. It’s only as good as you leverage it.
Seeing a big uptake in use in the education sector. Teachers paying for their own ChatGPT pro license to lesson plan etc.
Can't comment at this point on whether that's right or wrong; you'd hope the teachers using it would identify hallucinations etc. But you can see there is already a change occurring.
Because they don’t know how to use it.
I work for the government and we’re trialing Copilot too.
Yesterday I gave Copilot several legal documents and our department's long-term goals, and asked it to analyse those documents and find opportunities, legal complications, and a matrix of proposed actions.
In less than 5 minutes I had a great overview to start talks with local politicians. This would have taken me at least a day before AI.
I've shown my coworkers some practical implementations of Copilot, and that was enough to kickstart its use.
If you're composing the same mails a lot, for example, you can ask Copilot to make a template text, and then when you have to compose the same email again you ask Copilot to compose and personalize the mail for you. That's an awesome function.
I've made an agent that answers my team's HR-related questions. This saves me and HR a lot of time, and they are assured their questions are handled discreetly.
So there’s this thing called drafts.