Why I Use AI in My Editing Work (Sometimes)
If you’ve read my AI Use Policy, you know I take a thoughtful, transparent approach to using AI in my editing business.
When I first started drafting that policy, I kept backing up. I need to add this! I need to explain that! I need to say, yes! I get it! There are some real negatives in the LLM origin story!!
My original policy was five pages long.
Five. Pages.
Or, in editor-speak, 2000 words. (Which is eight, EIGHT manuscript pages!!)
Full disclosure: I used Microsoft Copilot to cut that sucker down to a lean, mean 391 words. With headings. And bullet points. And all em-dashes and rhetorical questions revised right on out of there (along with an additional 55 words).
But I kept wondering: would it help you feel more confident in me if I explained why I’ve chosen to use AI? So I turned to Copilot and said, hey! Can you take the 1600-ish words we cut and turn them into a blog post? And then I started revising.
I Love Technology
I do, I really do. I love seeing where I started and where I am now thanks to tech.
I grew up with rotary phones and paper letters. I earned a computer science degree when women were even rarer in the field than they are now. I helped design portions of Microsoft Outlook 98 before my family knew what email was. I’ve watched tech evolve from the first time we saw www.toyota.com in a commercial and joked that “this world wide web thing might stick around” to today, when AI is everywhere.
So when AI started popping up in my editing circles, I didn’t panic. But I didn’t jump in blindly either.
I Love Sci-Fi
When I was ten, I read The Girl with the Silver Eyes (still in print, y’all!) and became a lifelong sci-fi fan. I love the tension the genre explores: future tech that makes our lives infinitely better on one hand, tech used to control humanity on the other.
When I look at where I came from, I feel like I’m living in the future that those old stories imagined.
So I Had to Try AI
At first, I was skeptical. AI felt like another wave of tech panic, like when PCs were going to replace secretaries (they didn’t) or GPS was going to make us all forget how to read paper maps (it did).
I didn’t know what to make of it. I’m a freelancer working out of my house; I don’t exactly have close colleagues to ask for advice. And many of the editing forums I’m in are full of editors and writers decrying the use of AI.
But I didn’t feel comfortable ignoring AI and refusing to use it. That felt antithetical to my sci-fi and tech-loving self.
I knew I couldn’t form a real opinion without trying it, even if I had no clue how to get started. So I got started.
I used Claude to help with a developmental edit of a complex sci-fi manuscript. I used Copilot in Word to assist with line editing for a writer whose first language isn’t English. I used Grok to research outdoor kitchen appliances for a backyard remodel. I took an introductory “AI for Editors” course that started with basic vocabulary and ended with a look at the ethics of AI use.
What I Learned
I worked best with AI when I started thinking of it as a brilliant assistant who’s so new they’re still shiny.
Just like with that shiny assistant, AI can help with simple, tedious, or rules-based tasks like formatting, fact-checking, and brainstorming.
But only if I can provide clear instructions. If I can’t explain what I want, AI flounders. If I don’t ask good iterative questions, AI can’t fix what it produces. AI needs strong guardrails and constant verification to help it produce good results.
Why I Created a Policy
I want to be clear with my clients about how and when I use AI. Writers are responding to AI in wildly different ways, from “never ever ever” to “can it replace you, the editor?” I respect all those feelings. My goal is to meet each client where they are, even if that means NOT using AI for any portion of our project together.
That’s why my policy prioritizes:
Client consent: I’ll always ask how you feel about AI and adjust my workflow accordingly.
Transparency: I disclose when and how I use AI, and I answer three key questions: why AI was used, what AI did, and what I did.
Accountability: I stay in control of the editing process and verify all AI output.
Risk mitigation: I choose tools with strict data security policies and the option to opt out of having your material used for training.
Looking Ahead
I don’t think AI is going away. But I do think we can use it to handle the tedious, rules-based tasks, freeing us up to do what humans do best: creatively solve problems.
If you’re curious, cautious, or somewhere in between, I’d love to talk about how, or if, AI fits into your project.
You do you. I’ll meet you there.