Getting your head round the curveballs AI for collaboration will lob into your teams
Now it gets interesting. How will GenAI disrupt you and your teams?

[8 minute read]

Today it gets interesting. What issues is the arrival of AI raising for leaders of knowledge workers?

Quick recap!
Day 1: I talked you through the core AI technology relating to collaboration and how it works, to help you feel completely confident talking about it.

Day 2: We walked through the 9 collaboration areas on my radar right now for AI to potentially transform.

Today: I'll explore the 10 key question marks that this vast technology opportunity will raise for workplace culture, norms and behaviours, so you feel confident to raise and debate them in your own organisation.

Day 4: We'll look at the specific skills your teams will need to develop in order to adapt to the arrival of AI.

Day 5: I'll talk you through the main decisions you're going to need to make, and we can start to develop your thinking and form a simple plan.

As we walk down this corridor of opportunity, what challenges are we going to face as leaders? Today, let's identify them. Over the next two days, we'll look at how to handle them.

First, let's just remind ourselves of the opportunities
Before we dive straight into these challenges, let's just take a moment to reflect on the opportunities for how you collaborate.
- Speed. AI takes some of the tasks we used to do manually and completes them at "machine speed".
- Scalability. AI doesn't need a break or a holiday, and there are few limits to the volume it can handle. For example, it can schedule or reschedule 1000s of meetings to optimise hundreds of schedules.
- Democratisation. If Einstein is joining every team, suddenly everyone has an opportunity to be creative, expert and decisive... and more. Can we all suddenly work outside our experience, skillset and domain expertise?
- Personalisation. AI can handle multiple people's requirements and constraints, instantly optimising rather than satisficing (which is when humans trade off time/effort against optimisation, settling on a "good enough" option in the time available). It can do this at vast scale and precision (think about how Spotify uses AI to create personalised playlists for every one of its millions of users). Likewise, AI can create bespoke content for everyone, e.g. each employee has an entirely personalised company handbook or intranet.
- Embodiment. A collection of AI capability can be given a persona of its own, either virtually (like a virtual assistant with a name) or physically as a robot, or cobot, creating new "colleagues" for us to collaborate with.

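To make the scalability and personalisation points concrete, here is a toy sketch (all names and calendars invented) of the kind of constraint problem an AI scheduler solves, just at a vastly larger scale: finding a slot that genuinely works for everyone, rather than a human settling for "good enough".

```python
# Toy sketch: find the hours when *everyone* is free across several calendars.
# All data here is invented. Real AI schedulers juggle time zones, priorities
# and thousands of calendars, but the core idea is the same: optimise over
# every constraint at once instead of a human's "good enough" guess.
busy = {
    "Priya":  {9, 10, 13, 14},   # busy hours, 24h clock
    "Marcus": {9, 11, 15},
    "Jo":     {10, 11, 16},
}
working_hours = set(range(9, 18))  # 09:00-17:00

free_for_all = working_hours - set().union(*busy.values())
earliest = min(free_for_all)

print(sorted(free_for_all))        # hours that suit every calendar
print(f"Proposed slot: {earliest}:00")
```

Now imagine the same intersection run across hundreds of calendars with preferences and priorities layered in; that is the scale advantage described above.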
Now, we'll explore the challenges...
Ok, now let's examine the issues. This is my non-exhaustive and inevitably imperfect summary of the kinds of things we need to be thinking about as leaders, specifically as AI relates to how we meet and collaborate.

1. Dehumanisation - what can be AI-ified and what must remain human?
Let's tackle the most obvious issue head-on. To see and be seen is a uniquely human need which AI can never truly fulfil. As AI takes over more tasks, we might find ourselves spending more time interacting with machines than with our colleagues. This shift risks creating a sense of isolation and detachment among employees, much like the effect lockdowns had on face-to-face interactions.
Human connection isn't just a nice-to-have; it's the cornerstone of organisational culture. When employees feel disconnected, it impacts their engagement, motivation, and overall contribution to a positive work environment. In a workplace where interactions are increasingly mediated by AI, employees may feel undervalued, which can erode their sense of belonging and commitment.
Trust is fundamental to effective collaboration. When AI dominates decision-making or task execution, it can undermine trust in human judgment and expertise. Authenticity + facts = trust. This equation is the foundation of collaboration. As new technologies ever more faithfully imitate in-person, live or human-generated interactions, all three elements will inevitably change. What will your organisation gain and what will you lose?
Human interaction fuels creativity and innovation by fostering idea sharing and collaboration. However, as interactions become more automated, there's a risk of losing the spontaneity and diversity of perspectives that drive breakthrough ideas.
AI may excel at analysing data, but it lacks emotional intelligence and empathy. This can be frustrating for employees, particularly in situations requiring understanding and sensitivity.
The question will be: what can be brilliantly achieved through AI (not undermining but maximising the value of precious human connections, time and energy)? This will be a design thinking problem for every organisation.

2. Responsibility gap - is this the end of accountability?
If an algorithm is taking care of a set of tasks, or if a piece of work is generated by AI, we're going to get some blurry lines around accountability. Yes, absolutely we need FAT ML (Fairness, Accountability and Transparency in Machine Learning), but we also need to make sure that humans in our business understand and can explain the decisions and content AI is generating. Where do our responsibilities lie? Where does our accountability start and finish? Stephen Bush has written a great article on this.

3. Gear shift - can we hack the new running pace?
When the simpler and more formulaic tasks are handled by AI, humans will manage higher-level tasks. This will make roles potentially both more interesting... and more stressful. People will find themselves operating in a higher gear, more of the time. And the organisation will demand this: the investment in AI needs to pay back somehow, in more work and higher-quality work. Most people can't work in a high gear all the time without rest and reset. Without those lower-level tasks that naturally let us drop into a lower gear interspersed through the day, we will need to rethink how we rest and reset at work.

On the flip side, Adam Zuckerberg, PhD, ponders: "I've heard the argument before that AI will basically take the lower level knowledge work, and free up humans to focus more on higher level stuff, but the truth is, I don't know if we're up to it. If we're honest, most knowledge workers - even doctors and lawyers - do pretty low-level thinking most of the time."

4. Hallucinations - what about when AI gets weird?
Hallucinations are answers generated by GenAI that sound real and plausible but are in fact not true or accurate.
The reality is, this wave of technology is not like what has gone before it. It is "weird" (thank you, Ethan Mollick) in new and disruptive ways. Alexandre Eisenchteter believes that hallucinations are central to LLMs and that "the best use cases emerge when hallucinations are seen as a feature rather than a bug", acting as "a powerful springboard for robust innovative concepts."
Nevertheless, we'll need to figure out how to spot and handle hallucinations, something we've never really encountered before. Remember: GenAI is just a text prediction engine. There is no rationale or thought process behind what it provides. It's literally just asking: what is the next most likely combination of words? The advice is "check everything", which somewhat defeats the time saved by having AI generate the content in the first place.
How will people fact check and handle hallucinations?

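To see why "check everything" matters, here is a deliberately crude sketch of next-word prediction: a toy bigram model over an invented three-sentence corpus, nothing like a real LLM in scale, but the same principle. The output is chosen because it is statistically likely; at no point does anything check whether it is true.

```python
import random
from collections import defaultdict

# Toy "text prediction engine": record which word follows which in a tiny
# invented corpus, then generate likely-sounding continuations.
# Nothing here models truth - only what tends to come next.
corpus = ("the report was reviewed by the board . "
          "the report was approved by the board . "
          "the budget was approved by the team .").split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(word, length=8):
    """Repeatedly pick a word that has followed the previous one before."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Fluent grammar - but "the budget was reviewed by the team" would be a
# confident fabrication; the corpus never said it.
print(generate("the"))
```

Even at this toy scale, the model can emit statements no source ever made, which is exactly the hallucination problem writ small.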
5. Transparency and attribution - "Who created that? It's complicated..."
How will we know what was generated by AI, and what came from a human? What about when a human edits something created by AI? Do we need to know the attribution of work?
How platforms and tools convey this will influence tech choice.
Stormz founder Alexandre Eisenchteter is doing pioneering work in his own platform to make transparency front and centre, and is also developing a set of best practices to guide the industry. For example, in Stormz, an idea generated by AI is tagged with a "bot" emoji. But will that signposting itself create bias, e.g. do we automatically elevate, or indeed discount, something we know has come from AI?
How people have operated as knowledge workers to date will not be conducive to GenAI's capabilities via copilots. To be able to access and use the vast internal data and intelligence in organisations, we will need a big shift to working out loud: making meeting summaries available, sharing documents and other work in progress.
Our work product is territorial. "I made these slides, they're mine." For GenAI to work its magic and reduce the sheer volume of conversations internally, we will need a mindset shift to working 'open source'. And given that work is tribal and our value is attributed to our work product, this is not going to be an easy step for most organisations.

6. Surveillance - big brother is watching (also recording, transcribing and storing in perpetuity)
Everything is now "on the record" and, oh wow, this is going to have some big implications! If every meeting is recorded and transcribed AND content is easily retrievable, suddenly we're looking at a range of adaptive behaviours:
- Speaking less (too worried about saying the wrong thing)
- Speaking more (keen to be seen to be a valuable contributor)
- Speaking weirdly (one of my interviewees described how people had got into the habit of taking excerpts from transcripts to their performance reviews to demonstrate X or Y behavioural competency. She explained this led to many unusual contributions in meetings that were clearly "for the record")
And so on. It's a big change.
Contrast this with copilots, which will feel like a new, private channel for asking what we don't feel safe enough to ask out loud. Things we don't feel safe to ask directly in a meeting will be driven "underground" into the meeting copilot.
- Is Steve happy with my proposal?
- What does that acronym mean?
- How do I [do that thing I feel I should already know]?
Like Google, we'll tell it our deepest secrets - but can it be read by others? Can it be used in our performance review? Could it be brought up in a tribunal...?
And what does this do for trust? One of the foundations of my own meeting and collaboration approach is candour. I want people to build the trust and strength of relationship to ask key questions out loud. What if there is an easier way to ask, privately, via your copilot? Then we see less and less candour out loud, making it harder and harder for anyone to be candid.

7. Role and team transformation - dude, where's my team?
AI will confront people's role, status and value. Roles will be reshaped, which will suit some people, making their jobs more interesting, but it will threaten others as tasks their expertise was previously invaluable for will be quickly completed by AI.
But the more transformative issue is the way that teams will form and work together. If AI has just hired Einsteins onto your team, do you even need the same people? With 100 PAs and 100 project managers and 100 specialists round the table now, what will this mean for what we currently call a team? Maybe very little, maybe a huge amount - we just don't know.
There's a reason that AI is being used so well by individuals - and that's because it allows them to "be their own team". I think we will see a lot more of this in the future.

8. All the -isms - will AI help or hinder diversity and inclusion?
Will AI level the playing field or double down on inequalities in how we collaborate? Can it support neurodiversity or will it exclude some people?
You may well have heard of the "racist soap dispenser" present in one very famous tech office. If the soap dispenser is racist, what hope is there for collaborative AI?
And... might AI also be the great leveller? Where humans are biased, AI is neutral. Every voice is equal in a transcript. Every insight is equally valuable to AI. Every idea is accurately attributed.
We don't know the impact of AI for collaboration on DEI, but it needs to be front and centre of our thinking:
- Will AI accurately capture and attribute ideas to people in meetings?
- Can AI be a facilitator for the delivery of reasonable adjustments for neurodivergent people, anyone with hearing or sight loss, or any other kind of disability that affects our ability to process information and collaborate with others?
- Can AI make content more, not less, accessible?
- What biases need to be trained out of, or corrected for, in algorithms?

9. Agentic AI - "hang on, all our AI assistants just went to a meeting together!"
First, your AI agent will help you.
Then your AI agent will interact with someone else's AI agent.
But what about when multiple AI agents interact semi-autonomously?
The results may be... unpredictable. We don't have a playbook for that yet!

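For a feel of how quickly this gets hard to predict, here is a deliberately simple sketch (all names, calendars and rules invented) of two scheduling agents negotiating a meeting time on their owners' behalf. Even with rules this trivial, the outcome emerges from both diaries in a way neither owner directly chose.

```python
# Toy sketch: two scheduling "agents" negotiate a meeting semi-autonomously.
# Everything here is invented for illustration; real agent frameworks are far
# more capable - and their interactions far less predictable.

def make_agent(busy_hours):
    """Return an agent that accepts a free hour or counter-offers its next one."""
    def respond(proposal):
        if proposal not in busy_hours:
            return "accept", proposal
        for hour in range(proposal + 1, 18):   # counter with next free hour
            if hour not in busy_hours:
                return "counter", hour
        return "reject", None
    return respond

alices_agent = make_agent(busy_hours={9, 10, 14})
bobs_agent = make_agent(busy_hours={9, 11, 12})

proposal, agreed = 9, None
for _ in range(10):  # bounded loop: unbounded agent chatter is the risk
    verdict_a, slot_a = alices_agent(proposal)
    if verdict_a == "accept":
        verdict_b, slot_b = bobs_agent(proposal)
        if verdict_b == "accept":
            agreed = proposal
            break
        proposal = slot_b
    else:
        proposal = slot_a
    if proposal is None:
        break

print(f"Agreed slot: {agreed}:00" if agreed else "No agreement")
```

Two agents with three busy hours each already take several rounds of offers and counter-offers to converge; multiply that by dozens of agents with richer goals and the "no playbook" point becomes clear.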
10. Generation gaps - will the elastic snap?
For those in your workforce in their 60s, mobile phones, the internet and email only arrived part way through their career. When this generation started work, typewriters and smoking were still very much a part of office life.
Meet the new generation, who may never have worked in a business with an office, or may never have met anyone they work with in person. Wow, quite the divide.
Enter AI. How will you skill these generations in the ways they each need without widening this divide? How will you build trustful, high-performing teams which collaborate using AI, despite massive differences in mindset and skill?

11-14. And a few more thoughts to get your head round...
And I'm just going to bundle up a few more to get you thinking. All are worthy of full expansion in their own right.

- There are many ethical question marks about the use of AI. But from a collaboration point of view, to what extent will your employees feel confident that you, as their employer, will leverage AI ethically and responsibly?
- How will you manage the accuracy, security and intellectual property around generated content?
- How will you ensure decisions or recommendations created by AI are explainable? GDPR gives individual customers the right to an explanation for a decision made by an algorithm (e.g. a credit limit granted). Will the same explanations need to be accessible for decisions or actions created by AI affecting people within an organisation?
- Bad actors. What will stop employees outsourcing elements of their job via stealth IT? Can AI be used to mask performance or trust issues?