I have been creating useful content for Advent users and the financial services firms they work with for many years now. Part of writing the blog is therapeutic for me, part is a well-intentioned effort to foster goodwill by sharing the lessons I have learned with other users to reduce their pain points, and part is an effort to get the word out about what I do in case users find they need someone with my unique skill set.

When ChatGPT and similar services arrived on the scene, I was somewhat concerned about what would happen if those engines sucked up the knowledge base I have created and presented it to my audience as their own information. There are documented ways to discourage AI bots: robots.txt directives, Cloudflare's AI crawler controls, WAF blocking, and CMS-specific settings. I know this, but early on I made a choice to allow the crawling rather than fight it.
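For anyone weighing the opposite choice, here is a minimal sketch of the robots.txt approach. The user-agent strings below are ones these companies have published for their crawlers, but compliance with robots.txt is voluntary, so treat this as a polite request rather than an enforcement mechanism:

```
# robots.txt — ask common AI crawlers not to index the site.
# Honoring these rules is voluntary on the crawler's part.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Cloudflare's AI crawler controls and WAF rules operate at the network edge instead, so they can actually refuse requests rather than merely asking.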
Initially, I was most worried that my knowledge would be presented without credit to its source, but this past week I realized that there is another issue altogether. While troubleshooting an Advent Software use case, I queried Perplexity. I was rewarded with a page of summary information that cited Advent, AdventGuru, and 13 other sources. Some of those sources were relevant; most were not.
As I drilled down on the problem, most of the sources cited were in fact me. So here I was querying Perplexity for assistance, and it was attempting to help me troubleshoot the issue using information I had provided. Some part of this makes sense and could be helpful if I were losing my faculties or wanted to query my own digital footprint related to the issue. Neither of those applies.
I wound up resolving the issue with the user and their IT consultant in less than an hour, and no part of the credit goes to Perplexity and its “best” model. Our solution was collaborative. The client arranged a Teams meeting with me and their IT consultant, and the three of us worked through a few possibilities together. Eventually we found a solution because we all made the time to meet and made solving the problem a priority, not because of an AI query.
In the process of writing this, I ran the same query on Opus 4.6 and GPT-5.4. The results were very similar. My blog and other online sources of current and historical information are the data that empower these engines to respond to practical and esoteric questions with anything relevant beyond their training data. However, as I read through their responses to the query, it became clear that I was heavily cited without any solution being provided to the problem.
While I am flattered that my subject matter expertise is held in such high esteem by AI inference engines, I am concerned that when AI models attempt to utilize what I have written, citing me repeatedly throughout their responses, and still fail to provide a solution, that failure reflects poorly on me. In the particular case we resolved, I am almost certain that a solution does not currently exist online. I have already written a separate blog post detailing the problem and its solution, but now the question for me is: do I put that solution up on my blog?
By doing so, I continue to provide access to users who have relied on me as a source of information that may not otherwise be discovered, documented, or publicly available. But I am also empowering AI inference to parrot my expertise in more meaningful ways that may make users think their favorite AI chatbot is a substitute for getting knowledge directly from the source. The latter is problematic because written works contain meaning and nuance that are lost when information is taken selectively and presented out of context.
Chatbots cannot be trusted to provide the best possible answer, only the best possible answer given their training, parameters, capabilities, available data, and the prompts we use to query them. My blog posts are representative breadcrumbs of the experience I have chosen to share. In this case, the chatbots reviewed my blogs to determine if and how something could be done, asserted that it could not be done, and then provided instructions on possible workarounds. Yet a solution could be found all along without the assistance of a chatbot, and it makes a lot more sense than the workarounds recommended by Perplexity and its cohorts.
About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of technology solutions to investment advisors nationwide.
For details, please visit Quartare.com or contact Kevin Shea by phone at 617-720-3400 x202.

