My Experiments with AI and What is Next

About a decade ago, I began my formal study of Artificial Intelligence and its implications for faith, life, and learning. I interviewed computer scientists, authors, and leaders of AI research labs. I read books and journal articles. I took notes and grappled to identify themes and essential patterns. It was a fascinating journey and helpful in building a foundation from which to begin my later experimentation.
Along the way, having done confidential consulting for education companies over the years (mainly providing thought partnership on new product development), I did a small amount of consulting on early adaptive learning education products that rely upon machine learning and artificial intelligence. To my surprise, I also got invited to private meetings and confidential conferences on related topics, sometimes hosted by some of the oldest, largest, and leading voices in modern computing. I’m still baffled, for example, at receiving what I thought was a mistaken invitation to a private conference of CEOs, heads of research and development units, chief technology officers, and chief innovation officers from some of the leading brands in the computer industry, along with many up-and-coming companies. A few hundred of us gathered at a five-star hotel in Florida. Instead of standard seating, we were each given a large white leather recliner in the main meeting area. To this day, I have no idea why I got invited, but the deep thinking and rich conversations certainly helped in my learning journey about the future of AI.
Early Experimentation
Then, consistent with how I tend to explore new topics, I reached a point when I wanted to learn more through experimentation. While I lacked access to a research team and the advanced expertise to conduct some of the experiments floating around in my head, I could begin simple experiments when I gained public beta access to GPT-based chatbots around 2022. It was an intriguing couple of years, initially using them as an interactive and advanced search engine.
I wanted to test its capabilities. One early experiment involved creating a new blog (no longer public) and developing six distinct AI personas to serve as authors, each with a fictional profile and a disclaimer that all content on the blog was AI-generated with human editing. I played the role of senior editor and may have enjoyed being the stereotypical hard-nosed writing critic a bit too much. Each persona/author had a weekly quota and pitched ideas and articles to me. I would review the pitches, give feedback, and do developmental editing, and then the personas could submit full articles. Within a month, the blog had over a hundred live articles, and traffic was beginning to pick up. After about three months, I shut down the experiment.
As much as it helped me learn about the limits and capabilities of this emerging technology, I also found something missing from the writing. It lacked the context of combining lived experience with information and knowledge. An AI persona can pretend to have a personal experience but cannot create something informed by a life of joys, challenges, and lessons. In the Lutheran tradition, there is a classic work by C.F.W. Walther about the biblical teachings of law and gospel. In that text, Walther notes that the concept of law and gospel is simple enough for a child to grasp but can only truly be taught by the Holy Spirit in the school of experience. This is what I sensed with the AI personas. They lacked the authenticity and texture that comes from the school of lived experience.
Switching Roles
So, in the last few months of 2024, I decided to switch roles for the next set of experiments. Since writing is one of my greatest joys and passions (and I already write daily for many purposes where I don’t use generative AI), it would not be fulfilling to edit the dry writing of an AI persona. As such, this time, I would be the writer in this experiment, and AI could be the editor. Here is how it worked. I would write complete first drafts. Then, I could use AI as an editor for my work. Simultaneously, I could experiment with a form of content creation through transcription that has been used for many decades by authors and leaders. I could record myself speaking about a topic and then have AI transcribe it.
I decided to form three different Substack publications, each focused on a different theme. One, the Moonshot Institute, would focus on more formal research reports, concept papers, and white papers on futures and innovation in education. The Bull Pen (the source of the article you are reading right now) would serve as a broader personal exploration of faith, life, learning, and leadership. Then, a third Substack would focus on what it takes to be a faithful and flourishing Christian school. The last one was good timing, as I had written an entire book manuscript and submitted it to the publisher a few months earlier. So, the Substack would be a way to dive into more nuanced and detailed topics that build off the main ideas I wrote about in the book, fostering a community around people passionate about creating excellent Christian schools.
Unlike the first experiment, this one focused on supporting and amplifying my original thoughts and writing. I wanted this experiment to help me better understand if or how AI can allow one’s creative process to be deeply human but enhanced, akin to how a person might use glasses, a microscope, or a telescope to expand upon their natural visual capacity. As an author, when you submit a book to a publisher, they have multiple people review and edit your work. It is still your voice, style, ideas, and content. Yet, the editors help improve it, from line editing and proofreading to sometimes even venturing into developmental editing. Developmental editing goes beyond basic editing to address writing style and organization, strengthen arguments, suggest word choices, and ensure that the book engages and serves the target audience.
Many people today use AI for writing, even if they don’t recognize it as such. Grammarly is a product that is even recommended for students at many high schools and universities to aid in improving their writing. Yet, it is more than the spellchecker and grammar check of a decade ago. It offers proofreading, line editing, and some copy editing…but does not get into developmental editing. For that, one needs a skilled human editor. So, to use AI to assist with these tasks is not a large stretch beyond past practice.
What about creating papers based on an audio recording? Suppose I’m on a 45-minute drive, so I hit the record button and talk about a subject I’ve been studying for the past few weeks or months. I take the recording, drop it into a chatbot, and have it transcribe what I said. In the process, the chatbot makes decisions about grammar and punctuation, even venturing into all levels of editing, including developmental editing. Before you know it, the written product reflects a combination of the author’s style and the stylistic tendencies of the chatbot. If the writing topic is a more formal report or white paper, distinguishing between the AI and human contributions becomes even more challenging.
It is not ghostwriting. It is somewhat akin to a business practice from the late 19th century through the middle of the 20th century, when a manager would speak, and an administrative assistant would use shorthand to record the message, refine it, and then run it past the manager before sending it on the manager’s behalf. Then, in the 1970s, the practice evolved with recording devices, where the administrative assistant would draft memos and documents based on what the manager recorded. The editorial work varied from one manager and administrative assistant to another. Still, the general goal was to help the manager increase productivity while producing polished, professional written work that reflected the manager’s intent.
So far, this experiment has taught me lessons and posed questions that are highly relevant to the future of writing and various creative processes. Here are some of my present observations and lessons.
Much Good Writing is Not Solitary
Writing is rarely a solitary activity. Especially in the world of publishing, most of the best writers don’t do it alone. I certainly don’t consider myself one of the best writers. Still, I ran one of the most visited education blogs on the Internet for over a decade, publishing four or more weekly articles and garnering millions of readers a year. I did it by publishing entirely rough draft articles. I didn’t even use a spellchecker. People got the raw, unedited, first-draft thoughts of Bernard Bull.
It was embarrassing to go back through some of those articles and find as many as fifty typos in a 2,000-word article, one that, I later discovered, was read by some of the most influential leaders of our generation. Yet, somehow, it worked and resonated with readers. That is not how it usually works. Typically, there is some measure of editorial review in publications of that reach. Today, with the democratization of content sharing, there are tools available to provide most of the resources that countless CEOs, thought leaders, and business leaders have used for decades.
The Value of a Good Editor
Editors are an excellent and valuable part of published writing. They also have biases and preferences. I’ve published multiple books and written for newspapers and other publishers. As such, I’ve worked with dozens of editors, each influencing my writing and work differently. When writing for one of the top newspaper publishers in the nation many years ago, the editor tried to rewrite almost my entire first draft. I pushed back and negotiated, and we were ultimately happy with the result. However, what was published under my name reflected the style and influence of that editor, even in some instances of word choice, metaphors, and more. So, as more people choose to use AI as an editorial partner, you can expect to find similar complexities.
You Must Determine Boundaries and Enforce Them
AI editors can be intrusive. Experimenting with them has not gone the way I expected, especially when working with transcribed content. With ChatGPT, for example, I’m still learning how to use prompts that clarify my needs and expectations for the editorial process. If you don’t specify otherwise, the bot will sometimes impose more of itself than you want.
If someone wants AI to be a ghostwriter of articles for them, I suppose it is not a problem. Yet, for those of us who aspire to be the captain of our own writing ships, to have AI aid and support what we value as a fundamentally human, personal, and creative expression of our best thinking—that requires developing new skills, and we are all at different stages in that learning process.
Some will frame this as a fundamentally ethical issue. That is often the default lens for those in education worried about plagiarism and academic integrity. Of course, charting a path that reflects what is morally good is incredibly important, along with operating within the ethical boundaries of a given field or community. I’ve researched and written on that subject for decades.
We are also wise to be open to the fact that the ethical rules of writing are not all moral absolutes. Were all of the great philosophers of antiquity plagiarizing because they didn’t properly cite sources when alluding to the work of others? What about all the presidents who used speechwriters? How about the countless CEOs whose marketing teams provide outlines and rough drafts, or sometimes write the entire memo, which the CEO merely approves? How about the many best-selling books of the past century that an unknown editor or ghostwriter co-authored? Or how about the college students who received detailed developmental editing at a writing center, compared to those who wrote and submitted all their papers without any editorial review? Should they be graded on the same scale and standard?
As we venture into an age of AI-supported human efforts, we must ask critical ethical questions. Yet, we can’t use the rules of one game to judge what is happening in a new and different game. The ethics and rules for writing vary by context in the modern world, even as there are moral absolutes that can and should guide us across contexts. For example, when I’m working on a book-length manuscript and plan to work with a given publisher, even the individual publisher will have specific contractual expectations about one’s writing process. With my most recent book, the publisher included a specific restriction from using AI in the writing process. So, there is undoubtedly an ethical, and sometimes a legal, implication. Know the rules of the context and follow them. Or, if they conflict with your own code of ethics, you can choose not to participate.
Schools are struggling with this. Some schools opt for detailed and school-wide policies about what rules they will enforce regarding AI. Others opt for a broad and general expectation about doing original work, always citing sources, and not plagiarizing. However, they recognize and leave room for class and teacher-specific instructions for when or if AI is allowed. I can see wisdom in both approaches and a hybrid of the two. Yet, given that we are preparing people for a world where each of us will be moving between contexts with different sets of rules, perhaps it is helpful to give students some practice with that while in school.
Keeping it Human, and Maybe Even Making it More Human
Despite its promise, AI cannot tell an authentic first-person story of a lived human experience. The technology can make up stories, but if you ask a chatbot to write a love letter to someone, it is not capable of meaning what it writes. That matters. I also think this may give us an important starting point in deciding how to use technology in the future.
Chatbots are changing how I write. I was horrified when I recently wrote an article in the Notes app on my iPad, free from using any AI for editing. I dropped it in Grammarly for quick proofreading and then posted it online. Later, I ran the article through one of the many emerging tools to detect if AI wrote an article. It came back with 99% certainty that my article was AI-generated. How is that possible? Is it that I’ve been using AI to provide editing, and now it is starting to make me in its image—teaching me to write according to its stylistic preferences? Does my typical writing style fit the chatbot's stylistic programming?
I also recognize that I have several writing styles. When I’m in the mode of writing a formal research paper, I take on a very different writing persona than if I’m writing a letter to my kids, or even just writing a semi-informal article like this one. How did I react when I saw that my entirely human-written article was listed as likely to be AI-generated? I pulled it offline and rewrote it, running it through an AI detector until it perceived the writing as human (or potentially human according to its vernacular). As a result of such experiences, I’ve found myself trying to be more informal, use more human writing “mannerisms,” and not sound like something AI could write. Who knows, maybe this is leading to worse writing.
Keeping it Transparent
Because of my experimentation, specifically with Substack articles, I’ve added a disclaimer at the bottom of each article and on the About page. That disclaimer describes precisely how I do or do not use AI in the writing process. It further notes that I will disclose, in writing, if I am using AI in any way that exceeds basic editing or background research. By doing this, I hope to invite readers into an ongoing conversation about the ethics of AI while encouraging each of us to be more transparent and intentional in our choices. In this time of technological transition with AI, I suspect such a practice may be helpful.
The New Experiment
Yet, much of what I write is a process of sharing and connecting with people. I am trying to build authentic relationships through my writing, even as it is a way to refine my thinking and invite people into an ongoing conversation. Who wants to go out for a cup of coffee with a robot? Not only is it important that what I publish is authentically me, but it is also an important aspect of building a genuine and human connection with the readers.
This leads to my newest experiment. I want to determine how to invest my time and energy in forms of writing and creation that AI cannot do. Such an experiment likely means exploring different writing styles, focusing on lived experiences and examples, connecting disparate sources that AI would rarely put together, breaking conventional writing rules, more deeply integrating my core convictions, paying attention to nuances and competing perspectives, highlighting insights and ideas unlikely to be in a digital database, and coining new terminology, metaphors, and examples. I’m sure my list will change as I venture further into this experiment. As I see it, what matters most is clarifying what it means to be human and understanding a positive path forward in human-AI interaction.
By the way, I ran this article through one of the popular AI-detection tools, and it is “highly confident that this text is entirely human,” with a 2% probability that it is AI-generated. So, I guess I’m 98% of the way to my goal, at least for this article.
Disclaimer: Do you use AI to write the articles on Substack? The ethical use of AI is an important topic. When new technologies emerge, they often evolve faster than our ability to make sense of the ethical implications. As such, I offer this disclaimer to provide a transparent picture of my own journey and approach. I’ve already made mistakes, even embarrassing ones, but I will strive to quickly learn from them and provide a transparent view of my present approach. As such, this disclaimer will be updated over time.
The full initial draft (in writing or as an audio dictation), words, and ideas for my Substack articles always come from me. From there, I often use AI for editing Substack articles. I regularly use Grammarly and/or Microsoft Word’s built-in Spellcheck or Grammar Check (both of which are a form of AI) to aid in proofreading and editing my work on Substack. In instances where I use AI for something other than background research or editing my original work, you can expect that I will cite or note it in the article.
I also regularly use DALL-E to generate the images for many articles. In addition, I sometimes use royalty-free images. If credit is required by law, requested by the creator, or simply the courteous thing to do, you can expect to see the credits right below the image.
I continue to evolve in my experimentation with the use of ChatGPT, Grok, CoPilot (and various other ChatBot technologies) to serve as an editor for my Substack publications.
What does this mean? There are three common scenarios, though I hope to experiment with others in the future (and I will update this accordingly):
I write a full first draft in Word, Grammarly, or another word processor, and then submit it to the ChatBot, asking it to serve as an editor, akin to how I have one or more people edit almost anything that is published in my formal capacity. This is also similar to how editors review my manuscripts when they are submitted to a journal, newspaper, or book publisher. By the way, when I write for any of these partners, I never use AI beyond the basic spellcheck and grammar check available in Microsoft Word, not even in ways I would then cite.
I record myself speaking on a topic and then place the recording in a ChatBot to transcribe it, remove disfluencies, and provide a draft transcript that I can refine before publishing. This is where I’ve made the most mistakes in the past. Because the ChatBot is transcribing, it adds its own grammatical interpretations and even takes liberties with subheadings, organization, corrections, and clarifying language. As such, I’m still learning to use prompts that ensure my words, voice, style, and intent dominate, while also achieving a quality, personal, but streamlined approach to sharing ideas. Because this is an evolving practice for me, and because it sometimes creates a final draft that can be flagged as AI-generated content, expect that when I use this approach, it will be noted at the beginning or end of the article.
I use ChatBots to conduct background research related to topics that I’m writing about, akin to an interactive and advanced search engine. If there are quotes or unique ideas that I include in the article, you can expect that I will give some sort of citation or in-text credit.