How I Use ChatGPT to Run a Company: Context, Custom Data, Automation and Agents

Every day I use ChatGPT and related tools to get better at my job, solve problems faster and scale decisions across teams. If you’re a knowledge worker—or even a sommelier—there is an enormous advantage in treating a modern LLM as an always-on, highly capable intern with a PhD. This guide lays out the exact concepts I rely on, practical step-by-step approaches, and a few extra considerations most people miss when they try to “make ChatGPT smarter.”

Outline

  • Understanding the context window
  • Making ChatGPT aware of your private data: embeddings + RAG
  • Tool calling: giving the model real-time access to external systems
  • From generation to automation to orchestration (agents)
  • Practical daily workflows and examples
  • Risks, limitations and how to mitigate them
  • Onboarding and managing digital teammates
  • Strategic implications and what to build next
  • Recommended reading and next steps

1. The context window: why it’s the first thing to understand

Large language models don’t “know” everything about you or your company unless you give them the relevant information in the context window. Think of the context window as a sheet of paper that must contain both your prompt and the model’s response. If the data you need won’t fit on that sheet, the model can’t properly reason about it.

Key takeaways:

  • The context window limits what the model can attend to at once. Frontier models now support tens to hundreds of thousands of tokens, but long documents (or whole books) may still not fit.
  • Even if public web pages were included during training, models have a cutoff date; they don’t know updates made after training unless you provide them in-context or via live tools.
  • Always assume the model doesn’t have your private data unless you explicitly surface it.
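To make the budgeting concrete, here is a minimal sketch of a context-window check. It uses the common rule of thumb of roughly four characters per token for English text; real counts depend on the model's tokenizer, and the window size and reply reservation below are illustrative defaults, not any particular model's limits.

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, docs: list[str],
                    context_window: int = 128_000,
                    reserved_for_reply: int = 4_000) -> bool:
    """Check whether prompt + docs still leave room for the model's response."""
    used = estimate_tokens(prompt) + sum(estimate_tokens(d) for d in docs)
    return used + reserved_for_reply <= context_window

print(fits_in_context("Summarise these notes.", ["note " * 1000]))
```

Checks like this are worth running before every call: it is much cheaper to trim or retrieve fewer documents up front than to have a request silently truncated.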

2. Vector embeddings + Retrieval Augmented Generation (RAG): giving the model your documents

Instead of trying to retrain the model on all your private documents (impractical and often unnecessary), use embeddings and a vector database to perform semantic search. The flow looks like this:

  1. Convert each document (email, SOP, book chapter, spreadsheet) into a vector embedding.
  2. Store those embeddings in a vector store (semantic database).
  3. When you ask a question, query the vector store for the top-N semantically relevant docs.
  4. Insert those documents into the model’s context window and ask the model to answer.

This hack is the real unlock: you don’t retrain the LLM—you feed it just the handful of documents it needs to answer a specific question. It’s like giving a PhD intern access to the right files before they answer.
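The four-step flow above can be sketched end-to-end in memory. This is deliberately a toy: a bag-of-words count vector stands in for a real embedding model, and a Python list stands in for the vector database, purely to show the shape of the pipeline.

```python
# Toy RAG pipeline: embed -> store -> retrieve top-N -> build prompt.
from collections import Counter
import math
import re

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

docs = [
    "Refund policy: customers may return items within 30 days.",
    "Office hours are 9 to 5, Monday through Friday.",
    "Shipping takes 3 to 5 business days within the EU.",
]
store = [(d, embed(d)) for d in docs]          # steps 1-2: embed and store

def retrieve(query: str, top_n: int = 2) -> list[str]:
    """Step 3: return the top-N most similar documents."""
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [d for d, _ in ranked[:top_n]]

# Step 4: put the retrieved documents into the prompt.
question = "What is the refund policy?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In production you would swap `embed` for a real embedding model and `store` for a vector database, but the control flow stays exactly this shape.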

Technical notes:

  • Embeddings live in high-dimensional space; more dimensions usually capture more nuance, but at higher storage and compute cost.
  • Choosing top-N (often 3–7 docs) balances relevance vs context-window space.
  • Chunk long documents sensibly (by sections, chapters or logical blocks) so the vector store can return the most useful chunks.
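A minimal chunker in the spirit of the last note might split on blank lines (paragraph boundaries) and pack paragraphs into chunks of roughly a target size. Real pipelines often chunk by headings or token counts instead; the point is simply "split on logical blocks, never mid-sentence".

```python
def chunk_document(text: str, max_chars: int = 1500) -> list[str]:
    """Pack whole paragraphs into chunks of roughly max_chars characters."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        # Start a new chunk when adding this paragraph would overflow.
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Because chunks end on paragraph boundaries, each retrieved chunk reads as a coherent unit rather than a sentence fragment, which noticeably improves answer quality.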

3. Tool calling: giving the model live capabilities

Tool calling is effectively “pretend you have these tools” plus an orchestrator to execute them. You tell the model in its prompt that it can use certain tools (a browser, stock lookup, calculator, calendar API) and then, when the model asks to use one, your application executes the call and returns the results into the context window.

Why this matters:

  • It gives LLMs live access to up-to-date information without retraining.
  • It expands the intern’s capability by orders of magnitude—search, calculation, database access, email retrieval, actions like sending messages or creating tickets.
  • Tools can be custom-built for your company: payroll lookup, CRM queries, contract search, or any API you expose.

How tool calling usually works (simplified)

  1. Your application loads the LLM with instructions that certain tools are “available.”
  2. The LLM returns a structured request like “call tool X with these params.”
  3. Your app executes tool X, gathers the result, and returns it into the context window.
  4. The LLM continues the conversation with the new, real-time information.
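The four steps above boil down to a loop. In this sketch the model client is stubbed out; in a real application `call_model` would be an LLM API call that returns either a final answer or a structured tool request, and the tool names and payloads here are invented for illustration.

```python
# Skeleton of the tool-calling loop: model requests a tool, the app
# executes it, the result goes back into the conversation, repeat.
import json

TOOLS = {
    # Stand-in tool returning fake data; a real one would hit an API.
    "get_stock_price": lambda ticker: {"ticker": ticker, "price": 123.45},
}

def call_model(messages: list[dict]) -> dict:
    """Stub: pretend the model first asks for a tool, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_stock_price", "args": {"ticker": "ACME"}}
    return {"answer": "ACME is trading at 123.45."}

def run(user_question: str) -> str:
    messages = [{"role": "user", "content": user_question}]
    while True:
        reply = call_model(messages)                       # step 2: model replies
        if "tool" in reply:                                # structured tool request
            result = TOOLS[reply["tool"]](**reply["args"]) # step 3: execute it
            messages.append({"role": "tool", "content": json.dumps(result)})
        else:
            return reply["answer"]                         # step 4: final answer

print(run("What's ACME trading at?"))
```

The important property is that the application, not the model, executes the tool: the model only ever emits a structured request, which is what makes the pattern safe to gate, log and permission.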

4. From generation to automation to orchestration: the AI maturity curve

There are four practical ways to use LLMs:

  • Generate: write blog posts, images, first drafts.
  • Synthesize: summarize and extract insights from large documents.
  • Automate: have the model perform routine workflows (e.g., evaluate domain names, assemble candidate lists, draft and send templated emails).
  • Orchestrate: run and manage sets of agents that coordinate to achieve high-level goals.

Automation is already impactful today. Orchestration—delegating goals to networked agents that use tools, query databases and coordinate—is the next frontier.

5. Practical workflows I use daily

I use tailored GPTs and RAG-powered assistants 10–20 times a day. Examples you can implement:

  • Personal executive coach: upload goals, financials and preferences; ask “How should I respond to this email?” or “What’s the right decision given my goals?”
  • Email Q&A: index your sent and received emails into a vector store and ask historical questions, timelines or rationale behind past decisions.
  • Domain procurement automation: give the agent a concept, it brainstorms names, checks availability, fetches marketplace prices and flags bargains vs overpriced listings.
  • Product research: upload specs, competitor pages and customer feedback; ask for prioritised feature trade-offs.
  • Creative prototyping: outline a fictional world or game rules (2,000-word prompt) and use role-play to iterate and pressure-test narratives and mechanics.

6. Limitations, risks and how to mitigate them

LLMs are powerful but imperfect. Here’s what to watch out for—and practical mitigations.

Hallucinations (made-up facts)

LLMs can invent plausible-sounding but false information. They aren’t “lying”—they’re producing outputs conditioned on patterns they learned.

  • Mitigation: always validate critical outputs with authoritative sources or tools (tool calling to fetch facts, database queries, human review).
  • Mitigation: add citations and retrieval provenance to any business-facing answer.

Training cut-off and stale knowledge

Models were trained up to a specific date. Anything that changed after that is unknown unless surfaced via tools or RAG.

  • Mitigation: combine RAG and tool-calls for dynamic data (news, stock prices, internal dashboards).

Security, access control and privacy

Giving an external vendor access to your email, Slack or files requires trust—and fine-grained access controls often don’t exist.

  • Mitigation: prefer building internal connectors or use vetted enterprise offerings with clear scopes and audit logs.
  • Mitigation: restrict which datasets are exposed to which agents; use token-scoped access and rotate credentials.
  • Mitigation: for sensitive domains (healthcare, legal, finance), enforce human-in-the-loop verification and compliance checks.

Operational and cost concerns

Embedding generation, vector search, and repeated LLM calls incur costs and latency.

  • Mitigation: cache embeddings and retrievals, use cheaper or smaller models for non-critical tasks, and batch queries.
  • Mitigation: monitor usage and implement guardrails to prevent runaway costs.

Regulatory and ethical concerns

Consider GDPR, HIPAA and other compliance frameworks when exposing personal data to models or third parties.

  • Mitigation: anonymise or pseudonymise data where possible, maintain data processing records, and consult legal counsel for regulated use-cases.

7. Onboarding and managing digital teammates

Digital agents are not magic. Treat them like hires: you must onboard, coach and measure them.

  • Onboarding: create a knowledge base (SOPs, tone guidelines, objectives) that the agent can reference.
  • Training loop: gather examples of good vs bad outputs and use those to refine prompts, tool usage and retrieval strategies.
  • Performance reviews: define KPIs for agents (accuracy, time saved, error rate) and iterate.
  • New roles: expect roles like “agent manager” or “AI orchestration lead” to appear—people who compose agent workflows, manage credentials and triage failures.

8. Strategic implications and product/playbook ideas

Thinking strategically about AI unlocks new product directions and company plays:

  • Ship helper agents tailored to roles (sales assistant, HR onboarding agent, legal contract summariser).
  • Build onboarding and agent-management software: training libraries, versioning, and agent marketplaces.
  • For founders: build around problems AI currently can’t solve well but will likely improve quickly. That gives you time to acquire customers and iterate while the models evolve.
  • For investors: expect talent-centric plays where firms compete aggressively for engineers and researchers. Talent acquisition strategies will reshape the landscape.

9. Extra practical things most people miss

  • Evaluation frameworks: build tests and unit-like evaluations for model outputs (accuracy checks, hallucination frequency, bias audits).
  • Fallbacks and human-in-the-loop: design explicit fallback paths when confidence is low—escalate to a human reviewer or another tool.
  • Prompt versioning: keep prompt templates in version control and A/B test prompt changes like product experiments.
  • Data lifecycle: track what you store in embeddings and for how long; rotate or delete old vectors when privacy requires it.
  • Hybrid retrieval: combine keyword, semantic and metadata filters for precise retrieval (e.g., time range, author, doc type).
  • Model selection: use smaller models for lower-risk tasks to save costs, reserve larger models for reasoning-heavy flows.

10. Practical step-by-step: build a smarter company GPT

  1. Identify the use-case (email assistant, product research, onboarding helper).
  2. Collect relevant documents and break them into sensible chunks.
  3. Generate embeddings for each chunk and store them in a vector database.
  4. Create a retrieval pipeline that fetches top-N relevant chunks for any query.
  5. Design prompts that combine user query + retrieved context + tool capabilities.
  6. Implement tool-calling for live data (internal APIs, calendar, CRM, web search).
  7. Run initial human-in-the-loop evaluations and iterate prompt/templates.
  8. Put governance in place: logging, auditing, access control, cost monitoring.
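Step 5 is where most of the design lives, so here is one way it might look as a single template function. The instruction wording, tool descriptions and chunk contents are illustrative placeholders, not a prescribed format.

```python
def build_prompt(query: str, chunks: list[str], tools: dict[str, str]) -> str:
    """Combine user query, retrieved context, and tool descriptions."""
    tool_lines = "\n".join(f"- {name}: {desc}" for name, desc in tools.items())
    context = "\n---\n".join(chunks)
    return (
        "You are a company assistant. Answer using ONLY the context below.\n"
        "If the context is insufficient, say so or request a tool.\n\n"
        f"Available tools:\n{tool_lines}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

prompt = build_prompt(
    "When does the Q3 contract renew?",
    chunks=["Q3 contract renews on 1 November.",
            "Billing is handled by the finance team."],
    tools={"crm_lookup": "fetch account details by company name"},
)
```

Keeping this assembly in one function makes step 7 much easier: you can version the template, diff prompt changes, and A/B test them like any other product experiment.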

11. Use-cases that scale immediately

  • Internal knowledge Q&A (index SOPs, FAQs, internal comms).
  • Customer support summarisation and suggested responses.
  • Deal intelligence: compile timeline and rationale for major past decisions from email and meeting notes.
  • Marketing copy first drafts and creative brainstorming with guardrails.
  • Product discovery: synthesize user feedback into themes and prioritised suggestions.

12. Jobs, creativity and the future

AI is an amplifier: it multiplies the value of the people who use it. It will cause job displacement in certain tasks, but it will also create new roles and unlock creative expression for people who previously lacked the skills to realise their ideas. The right mental model is “you to the power of AI,” not “AI versus you.”

That said, expect interim pain. Some roles will be substantially changed or eliminated. The responsible approach is to prepare—reskill, create governance and design safety nets for those affected.

13. Competition for talent

As AI becomes strategic, expect aggressive talent acquisition: extraordinary offers and rapid poaching of researchers and engineers. The marketplace values the ability to anticipate and execute on future capabilities; firms that can bring experience and pattern recognition to the problem will be extremely attractive.

14. Recommended people and sources to follow

  • Follow leading ML researchers who explain ideas clearly and build in public—search for contributors who demystify model mechanics and evaluation methods.
  • Read practical product and business takes from seasoned software leaders who connect AI to go-to-market and operational changes.
  • Consume deep-dive explainers and engineering write-ups on embeddings, RAG, tool-calling and agent orchestration.

Conclusion — one practical habit to start today

Here’s the single best habit you can adopt immediately: before you sit down to work on any knowledge task, ask an AI assistant to take a first pass. Pretend you have access to an intern with a PhD in everything—describe the task, give a little context, and see what it produces. More often than not you’ll be surprised, and even when the output is imperfect it will surface ideas and shortcuts you wouldn’t have discovered on your own.

Use AI every day. Treat it like a teammate, train it with your documents, expose the right tools, and govern it responsibly. The leverage is real—so start small, measure, and scale.
