Schema
Pydantic allows automatic creation of JSON Schemas from models:
from enum import Enum

from pydantic import BaseModel, Field


class FooBar(BaseModel):
    count: int
    size: float = None


class Gender(str, Enum):
    male = 'male'
    female = 'female'
    other = 'other'
    not_given = 'not_given'


class MainModel(BaseModel):
    """
    This is the description of the main model
    """

    foo_bar: FooBar = Field(...)
    gender: Gender = Field(None, alias='Gender')
    snap: int = Field(
        42,
        title='The Snap',
        description='this is the value of snap',
        gt=30,
        lt=50,
    )

    class Config:
        title = 'Main'


# this is equivalent to json.dumps(MainModel.schema(), indent=2):
print(MainModel.schema_json(indent=2))
(This script is complete, it should run "as is")
Outputs:
{
  "title": "Main",
  "description": "This is the description of the main model",
  "type": "object",
  "properties": {
    "foo_bar": {
      "$ref": "#/definitions/FooBar"
    },
    "Gender": {
      "$ref": "#/definitions/Gender"
    },
    "snap": {
      "title": "The Snap",
      "description": "this is the value of snap",
      "default": 42,
      "exclusiveMinimum": 30,
      "exclusiveMaximum": 50,
      "type": "integer"
    }
  },
  "required": [
    "foo_bar"
  ],
  "definitions": {
    "FooBar": {
      "title": "FooBar",
      "type": "object",
      "properties": {
        "count": {
          "title": "Count",
          "type": "integer"
        },
        "size": {
          "title": "Size",
          "type": "number"
        }
      },
      "required": [
        "count"
      ]
    },
    "Gender": {
      "title": "Gender",
      "description": "An enumeration.",
      "enum": [
        "male",
        "female",
        "other",
        "not_given"
      ],
      "type": "string"
    }
  }
}
The generated schemas are compliant with the specifications: JSON Schema Core, JSON Schema Validation and OpenAPI.
BaseModel.schema will return a dict of the schema, while BaseModel.schema_json will return a JSON string
representation of that dict.
Sub-models used are added to the definitions JSON attribute and referenced, as per the spec.
All sub-models' (and their sub-models') schemas are put directly in a top-level definitions JSON key for easy re-use
and reference.
"Sub-models" with modifications (via the Field class) like a custom title, description or default value,
are recursively included instead of referenced.
The description for models is taken from either the docstring of the class or the argument description to
the Field class.
The schema is generated by default using aliases as keys, but it can be generated using model
property names instead by calling MainModel.schema/schema_json(by_alias=False).
The format of $refs ("#/definitions/FooBar" above) can be altered by calling schema() or schema_json()
with the ref_template keyword argument, e.g. ApplePie.schema(ref_template='/schemas/{model}.json#/'); here {model}
will be replaced with the model name using str.format().
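For example, with a small made-up pair of models (User and Address below are illustrative and not part of the example above), by_alias and ref_template behave as follows; the outputs in the comments are the expected results:

from pydantic import BaseModel, Field


class Address(BaseModel):
    city: str


class User(BaseModel):
    # the public (aliased) name differs from the attribute name
    user_name: str = Field(..., alias='userName')
    address: Address


# by default, aliases are used as property keys
print(list(User.schema()['properties']))
# expected: ['userName', 'address']

# with by_alias=False, attribute names are used instead
print(list(User.schema(by_alias=False)['properties']))
# expected: ['user_name', 'address']

# ref_template changes where the $refs point (the sub-model schemas themselves
# still live under the top-level "definitions" key)
print(User.schema(ref_template='#/components/schemas/{model}')['properties']['address'])
# expected: {'$ref': '#/components/schemas/Address'}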
Getting schema of a specified type
Pydantic includes two standalone utility functions schema_of and schema_json_of that can be used to
apply the schema generation logic used for pydantic models in a more ad-hoc way.
These functions behave similarly to BaseModel.schema and BaseModel.schema_json,
but work with arbitrary pydantic-compatible types.
from typing import Literal, Union

from typing_extensions import Annotated

from pydantic import BaseModel, Field, schema_json_of


class Cat(BaseModel):
    pet_type: Literal['cat']
    cat_name: str


class Dog(BaseModel):
    pet_type: Literal['dog']
    dog_name: str


Pet = Annotated[Union[Cat, Dog], Field(discriminator='pet_type')]

print(schema_json_of(Pet, title='The Pet Schema', indent=2))
"""
{
  "title": "The Pet Schema",
  "discriminator": {
    "propertyName": "pet_type",
    "mapping": {
      "cat": "#/definitions/Cat",
      "dog": "#/definitions/Dog"
    }
  },
  "oneOf": [
    {
      "$ref": "#/definitions/Cat"
    },
    {
      "$ref": "#/definitions/Dog"
    }
  ],
  "definitions": {
    "Cat": {
      "title": "Cat",
      "type": "object",
      "properties": {
        "pet_type": {
          "title": "Pet Type",
          "enum": [
            "cat"
          ],
          "type": "string"
        },
        "cat_name": {
          "title": "Cat Name",
          "type": "string"
        }
      },
      "required": [
        "pet_type",
        "cat_name"
      ]
    },
    "Dog": {
      "title": "Dog",
      "type": "object",
      "properties": {
        "pet_type": {
          "title": "Pet Type",
          "enum": [
            "dog"
          ],
          "type": "string"
        },
        "dog_name": {
          "title": "Dog Name",
          "type": "string"
        }
      },
      "required": [
        "pet_type",
        "dog_name"
      ]
    }
  }
}
"""
(This script is complete, it should run "as is")
Field customization
Optionally, the Field function can be used to provide extra information about the field and validations.
It has the following arguments:
- default: (a positional argument) the default value of the field. Since the Field replaces the field's default, this first argument can be used to set the default. Use ellipsis (...) to indicate the field is required.
- default_factory: a zero-argument callable that will be called when a default value is needed for this field. Among other purposes, this can be used to set dynamic default values. It is forbidden to set both default and default_factory.
- alias: the public name of the field
- title: if omitted, field_name.title() is used
- description: if omitted and the annotation is a sub-model, the docstring of the sub-model will be used
- exclude: exclude this field when dumping (.dict and .json) the instance. The exact syntax and configuration options are described in detail in the exporting models section.
- include: include (only) this field when dumping (.dict and .json) the instance. The exact syntax and configuration options are described in detail in the exporting models section.
- const: this argument must be the same as the field's default value if present.
- gt: for numeric values (int, float, Decimal), adds a validation of "greater than" and an annotation of exclusiveMinimum to the JSON Schema
- ge: for numeric values, this adds a validation of "greater than or equal" and an annotation of minimum to the JSON Schema
- lt: for numeric values, this adds a validation of "less than" and an annotation of exclusiveMaximum to the JSON Schema
- le: for numeric values, this adds a validation of "less than or equal" and an annotation of maximum to the JSON Schema
- multiple_of: for numeric values, this adds a validation of "a multiple of" and an annotation of multipleOf to the JSON Schema
- max_digits: for Decimal values, this adds a validation to have a maximum number of digits within the decimal. It does not include a zero before the decimal point or trailing decimal zeroes.
- decimal_places: for Decimal values, this adds a validation to have at most a number of decimal places allowed. It does not include trailing decimal zeroes.
- min_items: for list values, this adds a corresponding validation and an annotation of minItems to the JSON Schema
- max_items: for list values, this adds a corresponding validation and an annotation of maxItems to the JSON Schema
- unique_items: for list values, this adds a corresponding validation and an annotation of uniqueItems to the JSON Schema
- min_length: for string values, this adds a corresponding validation and an annotation of minLength to the JSON Schema
- max_length: for string values, this adds a corresponding validation and an annotation of maxLength to the JSON Schema
- allow_mutation: a boolean which defaults to True. When False, the field raises a TypeError if the field is assigned on an instance. The model config must set validate_assignment to True for this check to be performed.
- regex: for string values, this adds a Regular Expression validation generated from the passed string and an annotation of pattern to the JSON Schema

Note
pydantic validates strings using re.match, which treats regular expressions as implicitly anchored at the beginning. On the contrary, JSON Schema validators treat the pattern keyword as implicitly unanchored, more like what re.search does.
For interoperability, depending on your desired behavior, either explicitly anchor your regular expressions with ^ (e.g. ^foo to match any string starting with foo), or explicitly allow an arbitrary prefix with .*? (e.g. .*?foo to match any string containing the substring foo).
See #1631 for a discussion of possible changes to pydantic behavior in v2.

- repr: a boolean which defaults to True. When False, the field shall be hidden from the object representation.
- ** any other keyword arguments (e.g. examples) will be added verbatim to the field's schema
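A few of these arguments in action; a minimal sketch (the Order model and its fields are invented for illustration, not taken from the docs above):

from datetime import datetime

from pydantic import BaseModel, Field


class Order(BaseModel):
    # dynamic default via default_factory
    created: datetime = Field(default_factory=datetime.utcnow)
    # public name differs from the attribute name, with a numeric constraint
    customer_id: int = Field(..., alias='customerId', gt=0)
    # static default, length constraint, and hidden from the representation
    note: str = Field('', max_length=140, repr=False)


order = Order(customerId=123)
print(order)  # note is omitted from the output because repr=False
print(order.dict(by_alias=True))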
Instead of using Field, the fields property of the Config class can be used
to set all of the arguments above except default.
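A minimal sketch of that alternative (the model and values are invented):

from pydantic import BaseModel


class User(BaseModel):
    name: str

    class Config:
        # equivalent to: name: str = Field(..., alias='username', description='Public user name')
        fields = {'name': {'alias': 'username', 'description': 'Public user name'}}


print(User.schema()['properties'])
# the property should appear under the alias 'username' and carry the description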
Unenforced Field constraints
If pydantic finds constraints which are not being enforced, an error will be raised. If you want to force the
constraint to appear in the schema, even though it's not being checked upon parsing, you can use variadic arguments
to Field() with the raw schema attribute name:
from pydantic import BaseModel, Field, PositiveInt

try:
    # this won't work since PositiveInt takes precedence over the
    # constraints defined in Field meaning they're ignored
    class Model(BaseModel):
        foo: PositiveInt = Field(..., lt=10)

except ValueError as e:
    print(e)
"""
On field "foo" the following field constraints are set but not enforced:
lt.
For more details see https://docs.pydantic.dev/usage/schema/#unenforced-field-constraints
"""


# but you can set the schema attribute directly:
# (Note: here exclusiveMaximum will not be enforced)
class Model(BaseModel):
    foo: PositiveInt = Field(..., exclusiveMaximum=10)


print(Model.schema())
"""
{
    'title': 'Model',
    'type': 'object',
    'properties': {
        'foo': {
            'title': 'Foo',
            'exclusiveMaximum': 10,
            'exclusiveMinimum': 0,
            'type': 'integer',
        },
    },
    'required': ['foo'],
}
"""


# if you find yourself needing this, an alternative is to declare
# the constraints in Field (or you could use conint())
# here both constraints will be enforced:
class Model(BaseModel):
    # Here both constraints will be applied and the schema
    # will be generated correctly
    foo: int = Field(..., gt=0, lt=10)


print(Model.schema())
"""
{
    'title': 'Model',
    'type': 'object',
    'properties': {
        'foo': {
            'title': 'Foo',
            'exclusiveMinimum': 0,
            'exclusiveMaximum': 10,
            'type': 'integer',
        },
    },
    'required': ['foo'],
}
"""
(This script is complete, it should run "as is")
typing.Annotated Fields
Rather than assigning a Field value, it can be specified in the type hint with typing.Annotated:
from uuid import uuid4

from pydantic import BaseModel, Field
from typing_extensions import Annotated


class Foo(BaseModel):
    id: Annotated[str, Field(default_factory=lambda: uuid4().hex)]
    name: Annotated[str, Field(max_length=256)] = 'Bar'
(This script is complete, it should run "as is")
Field can only be supplied once per field - an error will be raised if used in Annotated and as the assigned value.
Defaults can be set outside Annotated as the assigned value or with Field.default_factory inside Annotated - the
Field.default argument is not supported inside Annotated.
For versions of Python prior to 3.9, typing_extensions.Annotated can be used.
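The "Field supplied twice" restriction can be seen directly; a small sketch (the Spam model is invented, and the exact exception type and message are assumed rather than taken from the docs):

from typing_extensions import Annotated

from pydantic import BaseModel, Field

try:
    # Field is supplied both inside Annotated and as the assigned value,
    # which is not allowed
    class Spam(BaseModel):
        id: Annotated[int, Field(gt=0)] = Field(default=1)

except Exception as e:
    print(type(e).__name__, e)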
Modifying schema in custom fields
Custom field types can customise the schema generated for them using the __modify_schema__ class method;
see Custom Data Types for more details.
__modify_schema__ can also take a field argument which will have type Optional[ModelField].
pydantic will inspect the signature of __modify_schema__ to determine whether the field argument should be
included.
from typing import Any, Callable, Dict, Generator, Optional

from pydantic import BaseModel, Field
from pydantic.fields import ModelField


class RestrictedAlphabetStr(str):
    @classmethod
    def __get_validators__(cls) -> Generator[Callable, None, None]:
        yield cls.validate

    @classmethod
    def validate(cls, value: str, field: ModelField):
        alphabet = field.field_info.extra['alphabet']
        if any(c not in alphabet for c in value):
            raise ValueError(f'{value!r} is not restricted to {alphabet!r}')
        return cls(value)

    @classmethod
    def __modify_schema__(
        cls, field_schema: Dict[str, Any], field: Optional[ModelField]
    ):
        if field:
            alphabet = field.field_info.extra['alphabet']
            field_schema['examples'] = [c * 3 for c in alphabet]


class MyModel(BaseModel):
    value: RestrictedAlphabetStr = Field(alphabet='ABC')


print(MyModel.schema_json(indent=2))
(This script is complete, it should run "as is")
Outputs:
{
  "title": "MyModel",
  "type": "object",
  "properties": {
    "value": {
      "title": "Value",
      "alphabet": "ABC",
      "examples": [
        "AAA",
        "BBB",
        "CCC"
      ],
      "type": "string"
    }
  },
  "required": [
    "value"
  ]
}
JSON Schema Types
Types, custom field types, and constraints (like max_length) are mapped to the corresponding spec formats in the
following priority order (when there is an equivalent available):
- JSON Schema Core
- JSON Schema Validation
- OpenAPI Data Types
- The standard format JSON field is used to define pydantic extensions for more complex string sub-types.
The field schema mapping from Python / pydantic to JSON Schema is done as follows:
| Python type | JSON Schema Type | Additional JSON Schema | Defined in |
|---|---|---|---|
| None | null | | JSON Schema Core |
| Same for type(None) or Literal[None] | | | |
| bool | boolean | | JSON Schema Core |
| str | string | | JSON Schema Core |
| float | number | | JSON Schema Core |
| int | integer | | JSON Schema Validation |
| dict | object | | JSON Schema Core |
| list | array | {"items": {}} | JSON Schema Core |
| tuple | array | {"items": {}} | JSON Schema Core |
| set | array | {"items": {}, "uniqueItems": true} | JSON Schema Validation |
| frozenset | array | {"items": {}, "uniqueItems": true} | JSON Schema Validation |
| List[str] | array | {"items": {"type": "string"}} | JSON Schema Validation |
| And equivalently for any other sub type, e.g. List[int]. | | | |
| Tuple[str, ...] | array | {"items": {"type": "string"}} | JSON Schema Validation |
| And equivalently for any other sub type, e.g. Tuple[int, ...]. | | | |
| Tuple[str, int] | array | {"items": [{"type": "string"}, {"type": "integer"}], "minItems": 2, "maxItems": 2} | JSON Schema Validation |
| And equivalently for any other set of subtypes. Note: If using schemas for OpenAPI, you shouldn't use this declaration, as it would not be valid in OpenAPI (although it is valid in JSON Schema). | | | |
| Dict[str, int] | object | {"additionalProperties": {"type": "integer"}} | JSON Schema Validation |
| And equivalently for any other subfields for dicts. Have in mind that although you can use other types as keys for dicts with Pydantic, only strings are valid keys for JSON, and so, only str is valid as JSON Schema key types. | | | |
| Union[str, int] | anyOf | {"anyOf": [{"type": "string"}, {"type": "integer"}]} | JSON Schema Validation |
| And equivalently for any other subfields for unions. | | | |
| Enum | enum | {"enum": [...]} | JSON Schema Validation |
| All the literal values in the enum are included in the definition. | | | |
| SecretStr | string | {"writeOnly": true} | JSON Schema Validation |
| SecretBytes | string | {"writeOnly": true} | JSON Schema Validation |
| EmailStr | string | {"format": "email"} | JSON Schema Validation |
| NameEmail | string | {"format": "name-email"} | Pydantic standard "format" extension |
| AnyUrl | string | {"format": "uri"} | JSON Schema Validation |
| Pattern | string | {"format": "regex"} | JSON Schema Validation |
| bytes | string | {"format": "binary"} | OpenAPI |
| Decimal | number | | JSON Schema Core |
| UUID1 | string | {"format": "uuid1"} | Pydantic standard "format" extension |
| UUID3 | string | {"format": "uuid3"} | Pydantic standard "format" extension |
| UUID4 | string | {"format": "uuid4"} | Pydantic standard "format" extension |
| UUID5 | string | {"format": "uuid5"} | Pydantic standard "format" extension |
| UUID | string | {"format": "uuid"} | Pydantic standard "format" extension |
| Suggested in OpenAPI. | | | |
| FilePath | string | {"format": "file-path"} | Pydantic standard "format" extension |
| DirectoryPath | string | {"format": "directory-path"} | Pydantic standard "format" extension |
| Path | string | {"format": "path"} | Pydantic standard "format" extension |
| datetime | string | {"format": "date-time"} | JSON Schema Validation |
| date | string | {"format": "date"} | JSON Schema Validation |
| time | string | {"format": "time"} | JSON Schema Validation |
| timedelta | number | {"format": "time-delta"} | Difference in seconds (a float), with Pydantic standard "format" extension |
| Suggested in JSON Schema repository's issues by maintainer. | | | |
| Json | string | {"format": "json-string"} | Pydantic standard "format" extension |
| IPv4Address | string | {"format": "ipv4"} | JSON Schema Validation |
| IPv6Address | string | {"format": "ipv6"} | JSON Schema Validation |
| IPvAnyAddress | string | {"format": "ipvanyaddress"} | Pydantic standard "format" extension |
| IPv4 or IPv6 address as used in ipaddress module | | | |
| IPv4Interface | string | {"format": "ipv4interface"} | Pydantic standard "format" extension |
| IPv4 interface as used in ipaddress module | | | |
| IPv6Interface | string | {"format": "ipv6interface"} | Pydantic standard "format" extension |
| IPv6 interface as used in ipaddress module | | | |
| IPvAnyInterface | string | {"format": "ipvanyinterface"} | Pydantic standard "format" extension |
| IPv4 or IPv6 interface as used in ipaddress module | | | |
| IPv4Network | string | {"format": "ipv4network"} | Pydantic standard "format" extension |
| IPv4 network as used in ipaddress module | | | |
| IPv6Network | string | {"format": "ipv6network"} | Pydantic standard "format" extension |
| IPv6 network as used in ipaddress module | | | |
| IPvAnyNetwork | string | {"format": "ipvanynetwork"} | Pydantic standard "format" extension |
| IPv4 or IPv6 network as used in ipaddress module | | | |
| StrictBool | boolean | | JSON Schema Core |
| StrictStr | string | | JSON Schema Core |
| ConstrainedStr | string | | JSON Schema Core |
| If the type has values declared for the constraints, they are included as validations. See the mapping for constr below. | | | |
| constr(regex='^text$', min_length=2, max_length=10) | string | {"pattern": "^text$", "minLength": 2, "maxLength": 10} | JSON Schema Validation |
| Any argument not passed to the function (not defined) will not be included in the schema. | | | |
| ConstrainedInt | integer | | JSON Schema Core |
| If the type has values declared for the constraints, they are included as validations. See the mapping for conint below. | | | |
| conint(gt=1, ge=2, lt=6, le=5, multiple_of=2) | integer | {"maximum": 5, "exclusiveMaximum": 6, "minimum": 2, "exclusiveMinimum": 1, "multipleOf": 2} | |
| Any argument not passed to the function (not defined) will not be included in the schema. | | | |
| PositiveInt | integer | {"exclusiveMinimum": 0} | JSON Schema Validation |
| NegativeInt | integer | {"exclusiveMaximum": 0} | JSON Schema Validation |
| NonNegativeInt | integer | {"minimum": 0} | JSON Schema Validation |
| NonPositiveInt | integer | {"maximum": 0} | JSON Schema Validation |
| ConstrainedFloat | number | | JSON Schema Core |
| If the type has values declared for the constraints, they are included as validations. See the mapping for confloat below. | | | |
| confloat(gt=1, ge=2, lt=6, le=5, multiple_of=2) | number | {"maximum": 5, "exclusiveMaximum": 6, "minimum": 2, "exclusiveMinimum": 1, "multipleOf": 2} | JSON Schema Validation |
| Any argument not passed to the function (not defined) will not be included in the schema. | | | |
| PositiveFloat | number | {"exclusiveMinimum": 0} | JSON Schema Validation |
| NegativeFloat | number | {"exclusiveMaximum": 0} | JSON Schema Validation |
| NonNegativeFloat | number | {"minimum": 0} | JSON Schema Validation |
| NonPositiveFloat | number | {"maximum": 0} | JSON Schema Validation |
| ConstrainedDecimal | number | | JSON Schema Core |
| If the type has values declared for the constraints, they are included as validations. See the mapping for condecimal below. | | | |
| condecimal(gt=1, ge=2, lt=6, le=5, multiple_of=2) | number | {"maximum": 5, "exclusiveMaximum": 6, "minimum": 2, "exclusiveMinimum": 1, "multipleOf": 2} | JSON Schema Validation |
| Any argument not passed to the function (not defined) will not be included in the schema. | | | |
| BaseModel | object | | JSON Schema Core |
| All the properties defined will be defined with standard JSON Schema, including submodels. | | | |
| Color | string | {"format": "color"} | Pydantic standard "format" extension |
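The constrained-type rows above can be checked directly with the schema_of helper introduced earlier; a minimal sketch (the titles are arbitrary, and the outputs shown in the comments are approximate, with key order not guaranteed):

from pydantic import conint, constr, schema_of

print(schema_of(constr(regex='^text$', min_length=2, max_length=10), title='Text'))
# roughly: {'title': 'Text', 'minLength': 2, 'maxLength': 10, 'pattern': '^text$', 'type': 'string'}

print(schema_of(conint(gt=1, ge=2, lt=6, le=5, multiple_of=2), title='Count'))
# roughly: {'title': 'Count', 'exclusiveMinimum': 1, 'minimum': 2, 'exclusiveMaximum': 6,
#           'maximum': 5, 'multipleOf': 2, 'type': 'integer'}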
Top-level schema generation
You can also generate a top-level JSON Schema that only includes a list of models and related
sub-models in its definitions:
import json

from pydantic import BaseModel
from pydantic.schema import schema


class Foo(BaseModel):
    a: str = None


class Model(BaseModel):
    b: Foo


class Bar(BaseModel):
    c: int


top_level_schema = schema([Model, Bar], title='My Schema')
print(json.dumps(top_level_schema, indent=2))
(This script is complete, it should run "as is")
Outputs:
{
  "title": "My Schema",
  "definitions": {
    "Foo": {
      "title": "Foo",
      "type": "object",
      "properties": {
        "a": {
          "title": "A",
          "type": "string"
        }
      }
    },
    "Model": {
      "title": "Model",
      "type": "object",
      "properties": {
        "b": {
          "$ref": "#/definitions/Foo"
        }
      },
      "required": [
        "b"
      ]
    },
    "Bar": {
      "title": "Bar",
      "type": "object",
      "properties": {
        "c": {
          "title": "C",
          "type": "integer"
        }
      },
      "required": [
        "c"
      ]
    }
  }
}
Schema customization
You can customize the generated $ref JSON location: the definitions are always stored under the key
definitions, but a specified prefix can be used for the references.
This is useful if you need to extend or modify the JSON Schema default definitions location. E.g. with OpenAPI:
import json

from pydantic import BaseModel
from pydantic.schema import schema


class Foo(BaseModel):
    a: int


class Model(BaseModel):
    a: Foo


# Default location for OpenAPI
top_level_schema = schema([Model], ref_prefix='#/components/schemas/')
print(json.dumps(top_level_schema, indent=2))
(This script is complete, it should run "as is")
Outputs:
{
  "definitions": {
    "Foo": {
      "title": "Foo",
      "type": "object",
      "properties": {
        "a": {
          "title": "A",
          "type": "integer"
        }
      },
      "required": [
        "a"
      ]
    },
    "Model": {
      "title": "Model",
      "type": "object",
      "properties": {
        "a": {
          "$ref": "#/components/schemas/Foo"
        }
      },
      "required": [
        "a"
      ]
    }
  }
}
It's also possible to extend/override the generated JSON schema in a model.
To do it, use the Config sub-class attribute schema_extra.
For example, you could add examples to the JSON Schema:
from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int

    class Config:
        schema_extra = {
            'examples': [
                {
                    'name': 'John Doe',
                    'age': 25,
                }
            ]
        }


print(Person.schema_json(indent=2))
(This script is complete, it should run "as is")
Outputs:
{
  "title": "Person",
  "type": "object",
  "properties": {
    "name": {
      "title": "Name",
      "type": "string"
    },
    "age": {
      "title": "Age",
      "type": "integer"
    }
  },
  "required": [
    "name",
    "age"
  ],
  "examples": [
    {
      "name": "John Doe",
      "age": 25
    }
  ]
}
For more fine-grained control, you can alternatively set schema_extra to a callable and post-process the generated schema.
The callable can have one or two positional arguments.
The first will be the schema dictionary.
The second, if accepted, will be the model class.
The callable is expected to mutate the schema dictionary in-place; the return value is not used.
For example, the title key can be removed from the model's properties:
from typing import Dict, Any, Type

from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int

    class Config:
        @staticmethod
        def schema_extra(schema: Dict[str, Any], model: Type['Person']) -> None:
            for prop in schema.get('properties', {}).values():
                prop.pop('title', None)


print(Person.schema_json(indent=2))
(This script is complete, it should run "as is")
Outputs:
{
  "title": "Person",
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    },
    "age": {
      "type": "integer"
    }
  },
  "required": [
    "name",
    "age"
  ]
}