Writing in Parallel with 4 Bots and other updates
Earlier this week I found myself editing a chapter using four LLMs in parallel. I’ll tell you about it below, but for those of you who enjoy the updates more than the posts about how I use technology, here’s a short digest of what I’ve been up to. I’ve kept it to three items this week.
1. Email addresses.
Ok, I said I’d leave the talk about LLMs until later, but I discovered some crucial (as the bots like to say) information for anyone using an LLM.
I’m currently cleaning up all my subscriptions, including email addresses that I’m paying for, but that no longer serve their purpose. An important step in getting rid of an email address is to make sure I change the addresses associated with all the platforms I use, of which there are many, and the mailing lists I’m subscribed to.
For a couple of years, I’ve been using one of my emails to sign up to newsletters and apps, including ChatGPT and Claude. But when I tried to change the email associated with those accounts, I found out that I couldn’t. It’s not possible. Both platforms suggest that you open a new account with a new email address. But, and this is a big but if you’ve been using the LLMs for a long time, you can’t transfer your conversations over.
It’s easy to export the data from ChatGPT to a format you can read, but with Claude it’s trickier. (Although customer service has suggested a way of doing this.)
So there. Beware.
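If you do export before closing an account, the download is not always easy to browse. Here’s a minimal sketch for peeking at a ChatGPT export, assuming it contains a `conversations.json` file holding a list of conversation objects, each with a `"title"` field (the exact file layout and field names are my assumption and may change):

```python
import json

def list_conversation_titles(path="conversations.json"):
    """List conversation titles from a ChatGPT data export.

    Assumes the export is a JSON file containing a list of
    conversation objects, each with a "title" field. This is a
    sketch of the format as I understand it, not a guarantee.
    """
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    # Fall back to a placeholder if a conversation has no title.
    return [c.get("title", "(untitled)") for c in conversations]
```

At least that way you can see at a glance which conversations are worth keeping before the old account goes.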
2. January was a month of catching up.
I spent the month working out what I needed to focus on for the first half of the year, and what changes to make now that I’ve decided that “I’m a writer and a Pilates instructor”. Well, I’m not the latter yet, but soon, it’s coming soon…
I’ve taken the decision to make Virtual not Distant dormant, and to trade under Spiralling Creativity in the future. I’ve also decided to broaden my idea of using Pilates in actor training, and focus on Voice and Presence for a wider crowd. But we shall see.
So, that was January.
3. February will be the month for Finishing, in March I will pick up my Mystery novel, and April will be about kicking off the A to Z of Spanish Proverbs. (Did you see what I did there? Feel free to comment below or hit reply!)
Finishing includes the readable draft of that Shakespeare/remote book (still “sans” title), the book on Connection, and getting qualified as a Pilates instructor.
Ok, I know for some of you these updates might be enough for today, so thank you for reading. For those of you who are here more for the genAI than for my own adventures, here’s a long read.
Using LLMs in Parallel for Book Editing
I do like experimenting with computer-related tech, and have always advocated for being “app-agnostic” - learning how to use different platforms at a basic level. So I’ve tried out the most popular text-generation AI tools.
I’ve left Gemini to one side, as when I gave it a go it would spit out bullet points as answers, which wasn’t what I was looking for. I’m sure I will come back to it at some point (these things develop at the speed of… well, at the speed of tech CEOs running a race to be the first to get to world domination).
So, I tend to use Claude AI (I have a paid subscription) for help with writing non-fiction and fiction (and for my Pilates studies), ChatGPT (free) for personal tasks, including drafting and following up on formal communication, NotebookLM (free) for analysing and summarising, and Perplexity (free) to look for articles and papers, usually on aspects of remote work.
Refining Non-Fiction
This week I used the bots to get feedback on the three most practical chapters of our book on Connection and Disconnection in Remote Teams. Bree (co-author) and I have been providing feedback on each other’s chapters, but we were at the point when we needed an outside eye to have a look at the text, to see if there was anything blatantly missing.
NotebookLM - the academic
I have found NotebookLM to be the best at this. The platform has been designed to work on the sources that YOU, the user, provide it with. It doesn’t generate its answers from its training data, but from the material you give it.
The platform got a lot of attention last year because one of the ways in which it analyses your sources is by creating an audio conversation between two voices, which sound remarkably human.
But that’s not the strength of the platform.
I use it to pull out common points in a set of articles, for example, or to draw out the main themes of my own work. I write organically, so often I don’t really know what I’m trying to say, and sometimes themes come up inadvertently. It’s fun to have someone (ok, something) look at your work as soon as you’ve finished, and get their take on it.
NotebookLM identified some topics and angles we were missing in our work, mainly because we don’t have direct experience of them. So I needed some guidance to start addressing them, and NotebookLM can only help to a certain extent, because it only draws from the sources you’ve given it.
So I turned to Claude, which I see as my writing buddy.
Claude AI - the poet
I asked Claude for help addressing three points that NotebookLM said were missing from the chapter on the role of the manager in preventing loneliness in their team: Limited Exploration of Conflict, Potential for Manager Burnout, and Diversity and Inclusion.
After asking for some pointers on how to address the feedback, I asked Claude to draft some paragraphs to give me a way into the topics, as it probably had further information to draw from, having been trained on extensive data.
I found it to be more of an impostor than usual.
For example, it started one paragraph with “The research indicates…”.
”Great!” I thought. “It found some studies or articles I can go and look at.”
But when I asked for the specific research it referred to, it answered:
“I should be transparent: I generated those study examples and statistics to illustrate the kind of evidence that would be valuable, but they aren't real studies. I was trying to be helpful (whaaaat?) by providing what I thought would be plausible examples, but I should not have presented them as if they were actual research.”
Ok, no good then. Let’s try another bot.
ChatGPT - the assistant
I took some of the points from NotebookLM and asked ChatGPT to help, as it has access to the internet - something Claude can’t do.
This is the point I was addressing from the feedback:
Focus on Metrics for Connection: The book could explore different ways of measuring connection and team belonging to see if the strategies are effective. It could offer tips on ways managers can understand if connection is working well in their teams or not.
I asked ChatGPT:
How can we measure connection in remote teams, both in a team and organisations? How do we know if our team members are lonely?
It replied with some ways of measuring loneliness in organisations, under different headings like Network Analysis, Surveys and Self-Reported Measures, and Retention and Turnover Data. It was a very extensive list, going way beyond what we cover in the book.
I then attached the relevant chapter and asked:
I'd like to strengthen this chapter of my book on connection and disconnection in remote teams. I've already mentioned some of what you suggest, but what's missing, and where would be the best places to address it in?
(As you can see, I was feeling rather lazy, and my prompting was weak too!)
ChatGPT replied with another collection of bullet points under the heading: What’s Missing or Could be Strengthened?
This bullet in particular caught my eye:
Case Study or Example of a Company That Actively Measures Connection
You mention Buffer extensively (it’s true!), but it would be useful to include another company that actively tracks connection metrics (e.g., GitLab, Automattic).
Great! Let’s dive in!
I asked:
Can you refer me to a source that explains how Automattic tracks connection metrics?
Automattic, the company behind WordPress.com, employs several strategies to foster and assess connection within its fully distributed team:
Annual Grand Meetup (it elaborated on this)
Buddy System for New Hires (it elaborated on this)
Focus on Output Over Hours (it elaborated on this)
While these practices are designed to foster connection and engagement, specific metrics or tools that Automattic uses to measure connection among team members are not publicly detailed. The company's emphasis on in-person gatherings, mentorship, and output-based evaluations suggests a qualitative approach to assessing team cohesion and individual well-being.
Looks like ChatGPT and Claude have a lot in common… I replied:
Ah - you shouldn't have suggested it as a company that tracks metrics...
To which it replied:
You're right—I shouldn't have assumed Automattic tracks connection metrics explicitly. Instead, they focus on fostering connection through in-person meetups, mentorship, and cultural practices. If you're specifically looking for companies that measure connection quantitatively, I can dig into examples like GitLab or other remote-first organisations that use data-driven approaches. Would you like me to do that?
And when I answered yes…
While GitLab may not track connection metrics explicitly, other organizations have implemented quantitative methods to assess connection within remote teams. For example:
And so on it went.
So I turned to Chatbot number 4: Perplexity.
Perplexity - the librarian
Perplexity has been defined as a search engine on steroids, and many people use it to look for studies and references to trusted sources. As well as a whole list of articles I could go and look at, it gave me this summary in reply to my question “Have you got any examples of companies tracking metrics on isolation in remote workers?”:
Yes, several companies have implemented metrics to track isolation among remote workers. A study by Worklytics identified a key metric: the percentage of team members with fewer than 3 strong collaborators in a week [3]. This metric serves as a "red flag" indicating low access to leadership and decision-makers, potentially leading to feelings of isolation. Another example comes from a Colorado State University study, which measured professional isolation, schedule flexibility, and supervisor support for work-life balance among remote employees [5]. The researchers found a clear connection between feelings of isolation and decreased job engagement.
At least it’s a start.
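That Worklytics-style red flag is simple enough to sketch. Here’s a minimal illustration, assuming you already have a count of each person’s strong collaborators for the week (the function name, threshold default, and data shape are mine for illustration, not anything from Worklytics):

```python
def isolation_red_flags(weekly_collaborators, threshold=3):
    """Fraction of team members flagged as potentially isolated.

    A member is flagged if they had fewer than `threshold` strong
    collaborators this week. `weekly_collaborators` maps each
    person's name to their count of strong collaborators.
    """
    if not weekly_collaborators:
        return 0.0
    flagged = [p for p, n in weekly_collaborators.items() if n < threshold]
    return len(flagged) / len(weekly_collaborators)
```

For example, `isolation_red_flags({"ana": 5, "ben": 1, "cat": 2, "dev": 4})` returns `0.5` - half the team is below the three-collaborator line. The hard part, of course, is deciding what counts as a “strong” collaboration in the first place.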
Now the hard work begins: editing and filling in the holes in our text. Unfortunately, none of the above can do that - yet. Or maybe that’s a good thing, as it’s part of the writing and learning process, and one a writer shouldn’t miss.
So there you have it.
Let me know if you have any updates of your own, or any further thoughts on any of the four archetypes I’ve created for our new online buddies.
As always, if you want me to share the extensive chats with the bots with you, let me know.
I leave you now with a random photo of a tree in Ravenscourt Park, with an interesting branch.
Have a lovely rest of your day!
Pilar
(yep, I’m signing this today!)