Our Methodology: How Nex Tools Researches, Writes, and Verifies
Nex Tools publishes content in a domain - spiritual and metaphysical frameworks - where the line between useful insight and pseudoscience is easy to blur. Much of the content ecosystem in this space lacks citation discipline, repeats received wisdom without attribution, or makes medical and psychological claims without evidence. We chose a different approach. This page documents exactly how we research, write, verify, and update the content on mynextools.com and its sister properties, in the spirit of editorial transparency that the broader spiritual content industry often lacks.
The methodology below governs all new content published as of 2026. Earlier content (pre-2026) was produced under a less rigorous process and is being progressively upgraded to current standards during our 2026 editorial review cycle. Pages updated under current methodology are marked with a current "Updated" date stamp.
The Five-Stage Content Pipeline
Every piece of long-form content on Nex Tools passes through five stages before publication. The stages are sequential, not parallel - a piece cannot move to drafting without completed research, and cannot reach publication without passing editorial review.
Stage 1: Topic Validation
Before we invest research time in a topic, we validate three things: (1) that there is genuine user demand for the content, (2) that we have primary sources we can reference, and (3) that the topic fits within our editorial scope of spiritual and consciousness tools rather than straying into areas where we lack expertise (medical advice, financial planning, legal guidance).
Topic validation uses keyword research data, question aggregators like Reddit and Quora, and our own analytics on which existing pages generate follow-up questions. A topic passes validation when the user demand is clear, the source material is available, and we can credibly write about it within our editorial scope.
Stage 2: Primary Source Research
For every topic, we identify the primary sources before drafting begins. Primary sources are the original scholars, practitioners, or researchers who developed the concept - not secondary summaries, not Wikipedia, not other content sites. For Human Design topics, Ra Uru Hu's original work and Jovian Archive documentation. For Jungian concepts, Jung's own writings. For manifestation research, the original peer-reviewed studies (Matthews, Kappes and Oettingen, Emmons). For astrology, Steven Forrest, Liz Greene, Demetra George, and other working astrologers who have published widely.
We maintain a growing internal bibliography of primary sources organized by topic. When we publish a claim, we know which source it came from and can cite it directly. When a primary source is not available for a specific claim, we either find one or do not make the claim.
Stage 3: AI-Assisted Drafting
We are transparent about using AI (large language models) as a drafting tool. The AI receives the researched source material, topic outline, brand voice guidelines, and formatting requirements. The AI produces an initial draft that a human editor then reviews, restructures, corrects, and expands. The AI does not select sources, does not make editorial decisions about what to include or exclude, and does not publish content directly.
Our position on AI assistance: the tool produces drafts faster than pure human writing, but the editorial judgment about what the content says, what sources it cites, what claims it makes, and how it engages the reader remains a human responsibility. AI without editorial oversight produces plausible but often inaccurate content. Editorial oversight without AI drafting produces slower output without meaningful quality advantage. The combination, done well, produces content faster while maintaining rigor.
Stage 4: Human Editorial Review
Every AI-drafted piece passes through human editorial review before publication. The review addresses:
- Claim accuracy. Does every factual claim match its cited source?
- Source attribution. Are the citations specific enough to verify?
- Voice and style. Does the piece match Nex brand voice (clear, confident-mystical, not woo-woo)?
- Structural quality. TL;DR at top, H2-H4 hierarchy, 7 FAQs, related questions, CTAs, internal links.
- Schema and SEO. Article + FAQPage + BreadcrumbList schema. Meta title and description within character limits. OG and Twitter Cards.
- Editorial standards compliance. No em dashes, no emojis, no overselling, no unsupported medical or psychological claims.
Pieces that fail review are returned to drafting with specific notes rather than published with caveats. The bar is publication quality or nothing.
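The schema requirements in the review checklist above can be sketched as structured data. The following is a minimal, hypothetical example of the Article + FAQPage + BreadcrumbList combination - the function name, page title, FAQ text, and URLs are placeholders of ours, not real Nex Tools markup:

```python
import json

def build_page_schema(title, description, url, faqs):
    """Build a JSON-LD @graph combining Article, FAQPage, and BreadcrumbList."""
    graph = [
        # Article node: the page itself.
        {"@type": "Article", "headline": title, "description": description, "url": url},
        # FAQPage node: one Question/Answer pair per FAQ entry.
        {
            "@type": "FAQPage",
            "mainEntity": [
                {
                    "@type": "Question",
                    "name": question,
                    "acceptedAnswer": {"@type": "Answer", "text": answer},
                }
                for question, answer in faqs
            ],
        },
        # BreadcrumbList node: site root, then this page.
        {
            "@type": "BreadcrumbList",
            "itemListElement": [
                {"@type": "ListItem", "position": 1, "name": "Home",
                 "item": "https://example.com/"},
                {"@type": "ListItem", "position": 2, "name": title, "item": url},
            ],
        },
    ]
    return json.dumps({"@context": "https://schema.org", "@graph": graph}, indent=2)

# Hypothetical page with a single FAQ entry.
markup = build_page_schema(
    title="What Is Shadow Work?",
    description="An overview of the Jungian concept of shadow work.",
    url="https://example.com/shadow-work",
    faqs=[("Is shadow work therapy?",
           "No; it is a reflective practice, not clinical treatment.")],
)
```

In practice this JSON-LD would be embedded in a `<script type="application/ld+json">` tag in the page head, which is what the review stage checks for.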
Stage 5: Citation Verification
Before publication, every external citation is verified. The cited work exists. The page number (where relevant) is accurate. The quote or claim attributed to the source actually appears in the source. This stage catches errors that both AI drafting and human review sometimes miss, particularly when citations have been passed around the internet long enough that misattributions have become common.
If a citation cannot be verified, it is either corrected (to the actual source) or removed (and the claim removed or rephrased without attribution).
Want to see the specific sources behind any page? Visit our Sources Library - a topic-organized bibliography of every primary source we reference across the 116+ pages on Nex Tools.
How We Classify and Cite Sources
Not all sources are equal. We use a three-tier source hierarchy adapted from academic research practice:
Tier 1: Primary Sources
The original developer of a concept, the original researcher who performed a study, or the working practitioner who has published extensively on a specific practice. Examples: Carl Jung for synchronicity and shadow work; Ra Uru Hu for Human Design; Dr. Gail Matthews for goal-setting research; Dr. Robert Emmons for gratitude research; Steven Forrest for evolutionary astrology. Tier 1 is our default citation level.
Tier 2: Secondary Scholarship
Reputable analysts, academics, and experienced practitioners who interpret and extend primary sources. Examples: Liz Greene on Saturn, Robert Johnson on Jungian shadow work, Chetan Parkyn on Human Design. Tier 2 is used when it adds meaningful interpretation beyond the primary source, or when the primary source is less accessible.
Tier 3: General Reference
Widely available reference materials - encyclopedias, reputable news outlets reporting on primary research, well-established publications in the field. Used sparingly and typically as accessibility points rather than as primary evidence. Wikipedia is not used as a source, though we may reference it as an accessibility link for topics our readers may want to explore further.
Sources We Do Not Use
- Other spiritual content sites - these often repeat secondary information without attribution. Citing them would propagate errors.
- Social media claims - TikTok spiritual content, Instagram angel number accounts, and similar ephemeral sources are not citable.
- AI-generated content from other sites - we do not cite other AI-drafted content as authoritative.
- Unverified folk wisdom - "ancient tradition" is not a citation. If a claim comes from a specific tradition, we cite a specific text or teacher within that tradition.
Our AI Disclosure
We use AI (Claude Opus, developed by Anthropic) as a drafting tool under human editorial oversight. Specifically:
- AI produces initial drafts. The AI receives the research, outline, voice guidelines, and formatting requirements, then writes a first draft that humans then review and revise.
- Humans direct the research. We choose the topics, identify the sources, and decide what to include or exclude. AI does not make these choices.
- Humans verify the output. Every AI-drafted piece is reviewed by a human editor before publication. Claims are checked against cited sources.
- Humans make editorial judgments. When a tradeoff exists (brevity vs detail, claim vs caveat), a human makes the call.
- Updates are human-driven. When a page needs to be updated based on new research or user feedback, the human editor initiates and reviews the update.
Our position on AI disclosure in the industry: many content operations use AI without disclosure, which we consider dishonest to readers. Many others dismiss AI entirely, which we consider a missed efficiency opportunity. Transparent disclosure of the hybrid workflow is the honest middle path. If the model matters to you as a reader, knowing our workflow lets you weight the content accordingly. If it does not matter to you, the disclosure costs nothing.
Update and Correction Policy
Content changes over time as we learn more. Our update protocol:
Scheduled Updates
All core content is reviewed on a rolling 12-month cycle. During review, we check for accuracy, outdated information, broken links, and new sources that should be integrated. The "Updated" date stamp on each page reflects the most recent review. Pages not updated within 18 months of original publication should be read with that in mind.
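The review windows above reduce to a simple date check. A minimal sketch - the 12- and 18-month thresholds come from this policy, but the function name and day-based approximation of months are ours:

```python
from datetime import date

def review_status(last_reviewed: date, today: date) -> str:
    """Classify a page against the review cycle.

    Roughly 12 months since the last review means the page is due for
    its scheduled review; roughly 18 months means readers should treat
    it as potentially stale. Months are approximated in days.
    """
    age_days = (today - last_reviewed).days
    if age_days > 548:   # ~18 months
        return "stale"
    if age_days > 365:   # ~12 months
        return "due-for-review"
    return "current"
```

A page last reviewed in January 2023 and checked in January 2025, for example, would come back `"stale"` and be prioritized in the review queue.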
Correction Policy
When a factual error is identified, we correct the page and update the date stamp. For substantive corrections (where the meaning of a claim changes), we add a brief note at the relevant section indicating the correction. For typos and minor clarifications, we correct silently. We do not remove historical versions from internal records, though we do not maintain public version histories.
Readers who identify factual errors can report them through our contact form. All reports receive a response within 7 days, though not all reports result in changes (we evaluate each on the merits).
Deletion Policy
We rarely delete content. Pages that become obsolete are updated with current information rather than removed. Pages that turn out to be incorrect at a foundational level are rewritten under the same URL rather than deleted. URL stability is an editorial value.
Our editorial standards define what we will and will not publish. Read the Editorial Standards page - fact-checking process, correction policy, conflict of interest disclosure, and AI policy.
What We Will Not Claim
Our editorial scope has firm boundaries. We will not:
- Make medical claims. Spiritual practices we cover (meditation, breathwork, frequency listening) may have supportive effects, but we do not claim they diagnose, treat, cure, or prevent any illness. Content related to wellness is framed as complementary practice, not medical intervention.
- Make financial claims. Spiritual practices we cover do not guarantee financial outcomes. We do not publish content that frames spirituality as a get-rich mechanism.
- Make psychological treatment claims. Shadow work, Kundalini, and trauma-adjacent topics are covered descriptively. We consistently refer readers to licensed mental health professionals for clinical work.
- Guarantee spiritual outcomes. We describe what traditions claim and what practitioners report, without guaranteeing that any reader will experience the same.
- Dismiss all skepticism. When the scientific evidence for a practice is limited, we say so. Our skeptic-friendly content is explicit about what research supports and what remains unverified.
Transparency Commitments
We commit to the following transparency practices:
- Cite every substantive claim. If a page says something specific, the source is named.
- Disclose AI use. Every content page is produced under our hybrid AI-plus-human workflow, and this page describes that workflow in detail.
- Disclose affiliate relationships. When we link to affiliate products, the relationship is disclosed on the linking page.
- Publish our editorial team. Our author page describes who is editorially responsible for the content.
- Publish our sources. Our Sources Library lists every primary source we reference across the site.
- Disclose corrections. Substantive corrections are marked on the page.
- Maintain stable URLs. We do not delete content or break URLs except for legal reasons.
Why This Matters for Our Content Domain
Much of the spiritual and metaphysical content online has no verification, no citations, no correction policy, and no editorial discipline. This has produced an ecosystem where claims are repeated without sourcing, misattributions propagate, and readers have no way to tell credible content from invention. Nex Tools chose to operate differently.
Our readers include practitioners doing serious work with these frameworks, people in vulnerable life moments seeking guidance, and skeptics evaluating whether any of it is worth engaging with. Each of these audiences deserves editorial rigor. The skeptic deserves to know when a claim is belief-based versus research-supported. The practitioner deserves accurate attribution so they can trace concepts to sources. The person in crisis deserves to not encounter medical claims framed as spiritual truth.
Editorial methodology is how we meet these needs simultaneously. Without it, we would produce content that is either too cautious for practitioners or too expansive for skeptics or too general for people seeking specific guidance. With it, we can write content that serves all three by being clear about what kind of claim is being made and what the basis for it is.
Want to understand who is editorially responsible for Nex Tools content? Meet the Nex Editorial Team - the team that oversees the research, drafting, and verification process.
How Our Methodology Compares to Industry Standards
| Practice | Typical Spiritual Content | Nex Tools |
|---|---|---|
| Primary source citations | Rare | Standard for substantive claims |
| AI disclosure | Usually absent | Explicit on methodology page |
| Update dates | Often missing or misleading | Reflects most recent review |
| Correction policy | Not documented | Documented + reader reporting |
| Conflict of interest disclosure | Rare | Disclosed at point of link |
| Editorial boundaries | Often none | Documented on this page |
Ongoing Methodology Development
This methodology is not fixed. As the AI content landscape evolves, as our reader base grows, and as our own editorial team learns, we update the process. Major methodology changes are documented with date stamps on this page. Readers who want to trace how our approach has evolved can review the edit history here.
Areas we are actively developing:
- Expanded peer review. For topics with significant clinical relevance (shadow work, Kundalini, trauma-adjacent material), we are building a network of licensed practitioners who review those specific pages.
- Deeper source transparency. We are moving toward per-page source disclosures in addition to the central Sources Library.
- Reader feedback integration. We are building systematic pathways for reader corrections to flow into our update cycle.
Contact and Feedback
If you identify a factual error, have a methodology question, or want to suggest a source we should consider, please reach out through our contact page. All reports receive a human response. Specific corrections typically result in page updates within 14 days.
Explore Our Transparency Stack
Our full transparency stack includes Editorial Standards, the Author page, and the Sources Library. Each adds detail to a different facet of our editorial commitments.
Related Transparency Documents
- About Nex Tools - mission, vision, and team overview.
- Editorial Standards - fact-checking, corrections, AI policy detail.
- Nex Editorial Team - the people behind the content.
- Sources Library - comprehensive bibliography.