POST /v1/med-lists/from-documents accepts one or more clinical PDFs for a
single patient and returns the same kind of deduplicated medication list you’d
get from /v1/medications/infer — but driven by the document text instead of
structured FHIR.
Use it when your input is what the patient or referring clinic actually sends:
a faxed referral packet, an Epic-rendered discharge summary, a printed
After-Visit Summary, or a stack of progress notes. We pull the medication
mentions from each document, reconcile them across the packet, and return one
entry per drug with the page / section / text-snippet evidence it was inferred
from.
Try it in the playground
Pre-loaded sample PDFs you can post in one click. Swap “Source data” to
“PDFs” once you’re in.
API key
Same Bearer key that powers /v1/medications/infer. No separate setup.
Quickstart
Get an API key
Same flow as the FHIR endpoint — see Authentication.
POST your PDFs as multipart/form-data
Each file goes under the files form field. Repeat files=@… for every PDF in the packet.
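The two quickstart steps can be sketched with the standard library alone (most clients will simply use their HTTP library's multipart support). A sketch, not a definitive client: the base URL `api.medlistiq.com` and the API key are placeholders, not taken from this page.

```python
import io
import urllib.request
import uuid


def build_multipart(files: list[tuple[str, bytes]]) -> tuple[str, bytes]:
    """Build a multipart/form-data body with one `files` part per PDF."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for filename, data in files:
        buf.write(f"--{boundary}\r\n".encode())
        buf.write(
            f'Content-Disposition: form-data; name="files"; '
            f'filename="{filename}"\r\n'.encode()
        )
        buf.write(b"Content-Type: application/pdf\r\n\r\n")
        buf.write(data)
        buf.write(b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return f"multipart/form-data; boundary={boundary}", buf.getvalue()


content_type, body = build_multipart(
    [("discharge_summary.pdf", b"%PDF-1.7 ..."),
     ("progress_note.pdf", b"%PDF-1.7 ...")]
)
req = urllib.request.Request(
    "https://api.medlistiq.com/v1/med-lists/from-documents",  # placeholder base URL
    data=body,
    headers={"Authorization": "Bearer YOUR_API_KEY", "Content-Type": content_type},
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here.
```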
How dedup works
A medication mentioned in three documents becomes one entry in medications with three items in sources[]. Cross-document reconciliation matches by canonical drug name (brand → generic where applicable), so:
- Lipitor in one doc and Atorvastatin in another → one Atorvastatin entry
- The same med in 5 of 8 progress notes → one entry, 5 sources
- Differing doses across docs → the reconciler picks the one from the most authoritative section (Discharge > Active/Current > Home/Outpatient > MAR > others)
status_conflict: true flags meds whose source documents disagreed on
state (e.g. one says active, another says stopped). MVP surfaces the conflict
but does not auto-resolve it — clients should review manually.
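Since the API flags but never resolves conflicts, clients typically split the list into auto-acceptable entries and a manual-review queue. A minimal sketch using the status_conflict field described above, with hypothetical data:

```python
def triage(medications: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split meds into auto-acceptable entries and ones needing manual review."""
    ok, review = [], []
    for med in medications:
        (review if med.get("status_conflict") else ok).append(med)
    return ok, review


meds = [
    {"drug_name": "Atorvastatin", "status": "active", "status_conflict": False},
    {"drug_name": "Warfarin", "status": "unknown", "status_conflict": True},
]
accepted, needs_review = triage(meds)
```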
Response shape
The body contains only medications. Cross-cutting metadata lives in
response headers:
| Header | Meaning |
|---|---|
| x-request-id | UUID for support tickets / log lookup |
| x-document-count | Number of PDFs in the request |
| x-total-page-count | Total pages across all PDFs after parsing |
| x-output-medication-count | len(medications) — same as the body field |
| x-processing-time-ms | End-to-end latency on our side |
| x-ruleset-version | Versioned tag of the extraction logic that ran |
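One cheap use of these headers is a consistency check after each call, logging x-request-id for support if anything looks off. A sketch, with header names taken from the table above:

```python
def check_response(headers: dict, body: dict) -> None:
    """Verify x-output-medication-count matches the body's medication list."""
    n = int(headers["x-output-medication-count"])
    if n != len(body["medications"]):
        raise ValueError(
            f"count mismatch (request {headers['x-request-id']}): "
            f"header says {n}, body has {len(body['medications'])}"
        )


check_response(
    {"x-output-medication-count": "1", "x-request-id": "abc-123"},
    {"medications": [{"drug_name": "Atorvastatin"}]},
)
```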
Each item in medications[] is an ExtractedMedication:
| Field | Type | Notes |
|---|---|---|
| display | string | Human-readable composite (drug + dose + route + frequency). |
| drug_name | string | Canonical name; brand → generic when a mapping exists. Use as the dedup key on your side. |
| rxnorm_code | string \| null | RxCUI from RxNorm. null when no match. |
| rxnorm_system | string \| null | Constant URI; present iff rxnorm_code is. |
| dose | string \| null | Dose as parsed (e.g. "500 mg"). |
| route | string \| null | Normalized route abbreviation (PO, IV, IM, SC, INH, TOP, …). |
| frequency | string \| null | Normalized frequency (BID, TID, QHS, PRN, Q6H, …). |
| status | string | "active", "stopped", or "unknown". Driven by the section heading the canonical mention was found under. |
| status_conflict | bool | true when sources disagreed. |
| sources[] | array | One item per detected mention. See below. |
sources[] item:
| Field | Type | Notes |
|---|---|---|
| document_name | string | Filename you submitted (discharge_summary.pdf). |
| page | int | 1-indexed page within that PDF. |
| section | string \| null | Detected section heading ("Discharge Medications", "Active Medications", …) or null if none was identified. |
| evidence_text | string | The raw line of text where the mention was matched. |
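These four fields are enough to render a human-readable provenance line per mention, e.g. for display in a review UI. A small sketch:

```python
def cite(source: dict) -> str:
    """Render one sources[] item as a human-readable provenance string."""
    section = source["section"] or "no section detected"
    return (f'{source["document_name"]}, p.{source["page"]} ({section}): '
            f'"{source["evidence_text"]}"')


line = cite({
    "document_name": "discharge_summary.pdf",
    "page": 3,
    "section": "Discharge Medications",
    "evidence_text": "Atorvastatin 40 mg PO QHS",
})
# → discharge_summary.pdf, p.3 (Discharge Medications): "Atorvastatin 40 mg PO QHS"
```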
medications[] is ordered by the section authority of each med’s canonical mention
(Discharge > Active/Current > …), then alphabetically by drug_name.
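That ordering can be reproduced client-side if you re-sort after filtering. The ranking below follows the precedence listed on this page, but the exact heading strings beyond the two examples given are assumptions, and the API does not say which sources[] item holds the canonical mention, so treating sources[0] as canonical is also an assumption:

```python
SECTION_RANK = {  # lower = more authoritative; heading strings partly assumed
    "Discharge Medications": 0,
    "Active Medications": 1,
    "Current Medications": 1,
    "Home Medications": 2,
    "Outpatient Medications": 2,
    "MAR": 3,
}


def sort_key(med: dict) -> tuple[int, str]:
    # Assumes sources[0] is the canonical mention (not guaranteed by the API).
    section = med["sources"][0]["section"]
    return SECTION_RANK.get(section, 99), med["drug_name"]


meds = [
    {"drug_name": "Metformin", "sources": [{"section": "Home Medications"}]},
    {"drug_name": "Atorvastatin", "sources": [{"section": "Discharge Medications"}]},
]
meds.sort(key=sort_key)
```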
Limits
| Limit | Value | What happens past it |
|---|---|---|
| Files per request | 15 | 422 validation error |
| Bytes per file | 50 MB | 422 |
| Total bytes per request | 150 MB | 422 |
| Total pages per request | 200 | 422 (during processing) |
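All but the page limit can be enforced before uploading, saving a round trip for a guaranteed 422 (the server only learns the page count after parsing). A sketch; the error wording here is illustrative, not the server's:

```python
MAX_FILES = 15
MAX_FILE_BYTES = 50 * 1024 * 1024       # 52428800
MAX_TOTAL_BYTES = 150 * 1024 * 1024     # 157286400


def preflight(files: list[tuple[str, bytes]]) -> list[str]:
    """Return a list of problems that would trigger a 422, empty if clean."""
    problems = []
    if len(files) > MAX_FILES:
        problems.append(f"too many files: max {MAX_FILES} (got {len(files)})")
    total = 0
    for name, data in files:
        total += len(data)
        if len(data) > MAX_FILE_BYTES:
            problems.append(f"file '{name}' exceeds {MAX_FILE_BYTES} bytes")
        if not data.startswith(b"%PDF-"):
            problems.append(f"file '{name}' does not look like a PDF")
    if total > MAX_TOTAL_BYTES:
        problems.append(f"total upload exceeds {MAX_TOTAL_BYTES} bytes")
    return problems
```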
Errors
Same error envelope and retry semantics as the rest of /v1/* — see
Errors. Endpoint-specific 422 details:
| Detail | Cause |
|---|---|
| at least one file is required | Empty multipart body — no files field at all. |
| every file must have a filename | A files part was uploaded with no filename. |
| too many files: max 15 per request (received N) | More than 15 PDFs. |
| file 'X' must be application/pdf (got image/png) | Wrong Content-Type on a files part. |
| file 'X' does not look like a PDF (missing %PDF- header) | Bytes don’t start with %PDF-. Catches renamed .txt/.docx. |
| file 'X' is N bytes; max per file is 52428800 bytes | Single file over 50 MB. |
| total upload size exceeds 157286400 bytes across all files | Combined upload over 150 MB. |
| payload exceeds 200-page synchronous limit; async mode coming later | Total page count too large. |
503 here generally means the upstream OCR service was briefly unavailable
or has not been configured. Retry with backoff.
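A minimal backoff sketch for the 503 case. It takes a zero-argument callable so it stays transport-agnostic; RuntimeError stands in for whatever HTTP-error type your client raises:

```python
import time


def with_backoff(call, attempts: int = 4, base_delay: float = 0.5):
    """Retry a callable that raises on 503, doubling the delay each attempt."""
    for attempt in range(attempts):
        try:
            return call()
        except RuntimeError:  # stand-in for your client's HTTP-503 error type
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```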
Differences from /v1/medications/infer
| | /v1/medications/infer | /v1/med-lists/from-documents |
|---|---|---|
| Input | FHIR resources (JSON body) | PDF files (multipart) |
| Status vocabulary | active / completed / stopped / cancelled / unknown | active / stopped / unknown |
| Confidence score | Per-medication float | Not yet — confidence work in progress |
| Provenance | provenance[med_id] keyed by med ID, with rule-trace evidence | sources[] per medication, with page + section + text-snippet evidence |
| Verbosity / format selectors | verbosity × format | None — single response shape |
| Latency | ~200 ms typical | Several seconds (page-bound) |
| Rate limit cost | 1 unit | Higher per-page; see plan limits |