The latest hypertext version of this document is available on the World Wide Web:
<URL:http://sunsite.unc.edu/boutell/png.html>
Permission is granted to reproduce this specification in complete and unaltered form. Excerpts may be printed with the following notice: "excerpted from the PNG (Portable Network Graphics) specification, ninth draft." No notice is required in software that follows this specification; notice is only required when reproducing or excerpting from the specification itself.
Although the initial motivation for developing PNG was to replace GIF, the design provides some useful new features not available in GIF, with minimal cost to developers.
GIF features retained in PNG include:
See Rationale: Why a new file format?, Why these features?, Why not these features?, Why not use format XYZ?.
See Rationale: Byte order.
Source data with a precision not directly supported in PNG (for example, 5 bit/sample truecolor) must be scaled up to the next higher supported bit depth. Such scaling is reversible and hence incurs no loss of data, while it reduces the number of cases that decoders must cope with. See Recommendations for encoders: Bitdepth scaling.
Three types of pixels are supported:
In all cases, pixels are packed into scanlines consecutively, without wasted space between pixels. (The allowable bit depths are restricted so that the packing is simple and efficient.) When pixels are less than 8 bits deep, they are packed into bytes with the leftmost pixel in the high-order bits of a byte, the rightmost in the low-order bits.
However, scanlines always begin on byte boundaries. When pixels are fewer than 8 bits deep, if the scanline width is not evenly divisible by the number of pixels per byte then the low-order bits in the last byte of each scanline are wasted. The contents of the padding bits added to fill out the last byte of a scanline are unspecified.
An additional "filter" byte is added to the beginning of every scanline, as described in detail below. The filter byte is not considered part of the image data, but it is included in the data stream sent to the compression step.
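As an illustration, the serialized size of one scanline follows directly from the rules above; the helper below is a sketch for illustration only, not part of the format definition:

```python
import math

def scanline_bytes(width, bits_per_pixel):
    """Bytes occupied by one serialized scanline: one filter byte,
    then the pixels packed into whole bytes (any leftover bits in
    the last byte are padding)."""
    return 1 + math.ceil(width * bits_per_pixel / 8)

# 10 pixels at 1 bit/pixel: 2 data bytes (6 padding bits) + 1 filter byte
print(scanline_bytes(10, 1))   # -> 3
```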
An alpha channel value of 0 represents full transparency, and a value of (2^bitdepth)-1 represents a fully opaque pixel. Intermediate values indicate partially transparent pixels that may be combined with a background image to yield a composite image.
Alpha channels may be included with images that have either 8 or 16 bits per sample, but not with images that have fewer than 8 bits per sample. Alpha values are represented with the same bit depth used for the image values. The alpha value is stored immediately following the grayscale or RGB values of the pixel.
The color stored for a pixel is not affected by the alpha value assigned to the pixel. This rule is sometimes called "unassociated" or "non-premultiplied" alpha. (Another common technique is to store pixel values premultiplied by the alpha fraction; in effect, the image is already composited against a black background. PNG does not use premultiplied alpha, since it precludes lossless storage.)
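Compositing a non-premultiplied sample over a background can be sketched as follows (the function name and the integer rounding choice are illustrative, not mandated by this specification):

```python
def composite(fg, alpha, bg, maxval):
    """Blend a non-premultiplied foreground sample over a background
    sample; alpha runs from 0 (transparent) to maxval (opaque)."""
    return (alpha * fg + (maxval - alpha) * bg) // maxval

print(composite(200, 0, 50, 255))     # alpha 0: background shows -> 50
print(composite(200, 255, 50, 255))   # alpha 255: foreground shows -> 200
```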
Transparency control is also possible without the storage cost of a full alpha channel. In a palette image, an alpha value may be defined for each palette entry. In grayscale and truecolor images, a single pixel value may be identified as being "transparent". These techniques are controlled by the tRNS ancillary chunk type.
If neither an alpha channel nor a tRNS chunk is present, all pixels in the image are to be treated as fully opaque.
Viewers may support transparency control partially, or not at all. See Recommendations for encoders: Alpha channel creation and Recommendations for decoders: Alpha channel processing.
PNG defines several different filter algorithms, including "none" which indicates no filtering. The filter algorithm is specified for each scanline by a filter type byte which precedes the filtered scanline in the precompression data stream. An intelligent encoder may switch filters from one scanline to the next. The method for choosing which filter to employ is up to the encoder.
See Rationale: Filtering.
With interlace type 0, pixels are stored sequentially from left to right, and scanlines sequentially from top to bottom (no interlacing).
Interlace type 1, known as Adam7 after its author, Adam M. Costello, consists of seven distinct passes over the image. Each pass transmits a subset of the pixels in the image. The pass in which each pixel is transmitted is defined by replicating the following 8-by-8 pattern over the entire image, starting at the upper left corner:
   1 6 4 6 2 6 4 6
   7 7 7 7 7 7 7 7
   5 6 5 6 5 6 5 6
   7 7 7 7 7 7 7 7
   3 6 4 6 3 6 4 6
   7 7 7 7 7 7 7 7
   5 6 5 6 5 6 5 6
   7 7 7 7 7 7 7 7

Within each pass, the selected pixels are transmitted left to right within a scanline, and selected scanlines sequentially from top to bottom. For example, pass 2 contains pixels 4, 12, 20, etc. of scanlines 0, 8, 16, etc. (numbering from 0,0 at the upper left corner). The last pass contains the entirety of scanlines 1, 3, 5, etc. This selection process is called "decimating" the image.
The data within each pass is laid out as though it were a complete image of the appropriate dimensions. For example, if pixels are fewer than 8 bits deep, each decimated scanline is padded to fill an integral number of bytes (see Image layout). Filtering is done on the decimated scanlines in the usual way, and a filter type byte is transmitted before each one (see Filter algorithms). Note that the transmission order is defined in such a way that each pass will have the same number of pixels on each decimated scanline that it transmits; this is necessary for convenient application of some of the filters.
Caution: If the image contains fewer than five columns or fewer than five rows, some passes may be entirely empty. Encoder and decoder authors should be careful to handle this case correctly.
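The dimensions of each pass's subimage, including the empty-pass case for small images, can be computed directly from the starting offsets and increments of the 8-by-8 pattern. The helper below is illustrative only:

```python
ROW_START = [0, 0, 4, 0, 2, 0, 1]
COL_START = [0, 4, 0, 2, 0, 1, 0]
ROW_STEP  = [8, 8, 8, 4, 4, 2, 2]
COL_STEP  = [8, 8, 4, 4, 2, 2, 1]

def pass_dimensions(width, height):
    """Width and height, in pixels, of the subimage sent in each of
    the seven passes; (0, 0) marks an entirely empty pass."""
    dims = []
    for p in range(7):
        w = max(0, (width  - COL_START[p] + COL_STEP[p] - 1) // COL_STEP[p])
        h = max(0, (height - ROW_START[p] + ROW_STEP[p] - 1) // ROW_STEP[p])
        dims.append((w, h) if w and h else (0, 0))
    return dims

print(pass_dimensions(4, 4)[1])   # -> (0, 0): pass 2 is empty below 5 columns
```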
For progressive display, it is recommended that decoders expand each received pixel to a rectangle covering the yet-to-be-transmitted pixel positions below and to the right of the received pixel. This process can be described by the following pseudocode:
   Starting_Row  [1..7] = { 0, 0, 4, 0, 2, 0, 1 }
   Starting_Col  [1..7] = { 0, 4, 0, 2, 0, 1, 0 }
   Row_Increment [1..7] = { 8, 8, 8, 4, 4, 2, 2 }
   Col_Increment [1..7] = { 8, 8, 4, 4, 2, 2, 1 }
   Block_Height  [1..7] = { 8, 8, 4, 4, 2, 2, 1 }
   Block_Width   [1..7] = { 8, 4, 4, 2, 2, 1, 1 }

   pass := 1
   while pass <= 7
   begin
       row := Starting_Row[pass]
       while row < height
       begin
           col := Starting_Col[pass]
           while col < width
           begin
               visit (row, col,
                      min (Block_Height[pass], height - row),
                      min (Block_Width[pass], width - col))
               col := col + Col_Increment[pass]
           end
           row := row + Row_Increment[pass]
       end
       pass := pass + 1
   end

Here, the function "visit(row,column,height,width)" obtains the next transmitted pixel and paints a rectangle of the specified height and width, whose upper-left corner is at the specified row and column, using the color indicated by the pixel. Note that row and column are measured from 0,0 at the upper left corner.
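The pseudocode translates directly into a runnable sketch, where "visit" simply records each rectangle instead of painting it:

```python
ROW_START = [0, 0, 4, 0, 2, 0, 1]
COL_START = [0, 4, 0, 2, 0, 1, 0]
ROW_STEP  = [8, 8, 8, 4, 4, 2, 2]
COL_STEP  = [8, 8, 4, 4, 2, 2, 1]
BLOCK_H   = [8, 8, 4, 4, 2, 2, 1]
BLOCK_W   = [8, 4, 4, 2, 2, 1, 1]

def progressive_rectangles(width, height):
    """Yield (row, col, height, width) for each transmitted pixel,
    in transmission order."""
    for p in range(7):
        row = ROW_START[p]
        while row < height:
            col = COL_START[p]
            while col < width:
                yield (row, col,
                       min(BLOCK_H[p], height - row),
                       min(BLOCK_W[p], width - col))
                col += COL_STEP[p]
            row += ROW_STEP[p]

rects = list(progressive_rectangles(8, 8))
print(len(rects))    # -> 64: one rectangle per pixel of an 8x8 image
print(rects[0])      # -> (0, 0, 8, 8): pass 1 paints the whole image first
```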
See Rationale: Interlacing.
   obright = bright ^ gamma

PNG images may specify the gamma of the camera (or simulated camera) that produced the image, and thus the gamma of the image with respect to the original scene. To get accurate tone reproduction, the gamma of the display device and the gamma of the image file should be reciprocals of each other, since the overall gamma of the system is the product of the gammas of each component. So, for example, if an image with a gamma of 0.4 is displayed on a CRT with a gamma of 2.5, the overall gamma of the system is 1.0. An overall gamma of 1.0 gives correct tone reproduction.
In practice, images of gamma around 1.0 and gamma around 0.45 are both widely found. PNG expects encoders to record the gamma if known, and it expects decoders to correct the image gamma if necessary for proper display on their display hardware. Failure to correct for image gamma leads to a too-dark or too-light display.
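A decoder's correction step can be sketched as follows; the function name and rounding are illustrative, and the exponent is chosen so that the product of the image gamma, the correction, and the display gamma is 1.0:

```python
def gamma_correct(sample, maxval, file_gamma, display_gamma):
    """Scale a decoded sample so the overall system gamma is 1.0;
    the correction exponent is 1 / (file_gamma * display_gamma)."""
    v = (sample / maxval) ** (1.0 / (file_gamma * display_gamma))
    return round(v * maxval)

# An image of gamma 0.4 on a CRT of gamma 2.5 needs no correction:
# 0.4 * 2.5 = 1.0, so the exponent is exactly 1.
print(gamma_correct(128, 255, 0.4, 2.5))   # -> 128
```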
Gamma correction is not applied to the alpha channel, if any. Alpha values always represent a linear fraction of full opacity.
See Rationale: Why gamma correction?, Recommendations for encoders: Encoder gamma handling, and Recommendations for decoders: Decoder gamma handling.
ISO 8859-1 (Latin-1) is the character set recommended for use in text strings. This character set is a superset of 7-bit ASCII.
Character codes not defined in Latin-1 may be used, but are unlikely to port across platforms correctly. (For that matter, any characters beyond 7-bit ASCII will not display correctly on all platforms; but Latin-1 represents a set which is widely portable.)
Provision is also made for the storage of compressed text.
See Rationale: Text strings.
137 80 78 71 13 10 26 10
This signature indicates that the remainder of the file contains a single PNG image, consisting of a series of chunks beginning with an IHDR chunk and ending with an IEND chunk.
See Rationale: PNG file signature.
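Checking the signature is a simple byte comparison, for instance:

```python
PNG_SIGNATURE = bytes([137, 80, 78, 71, 13, 10, 26, 10])

def has_png_signature(data):
    """True if the byte string begins with the 8-byte PNG signature."""
    return data[:8] == PNG_SIGNATURE

print(PNG_SIGNATURE[1:4])   # -> b'PNG': bytes 2-4 spell the format name
```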
Chunks may appear in any order, subject to the restrictions placed on each chunk type. (One notable restriction is that IHDR must appear first and IEND must appear last; thus the IEND chunk serves as an end-of-file marker.) Multiple chunks of the same type may appear, but only if specifically permitted for that type.
See Rationale: Chunk layout.
Four bits of the type code, namely bit 5 (value 32) of each byte, are used to convey chunk properties. This choice means that a human can read off the assigned properties according to whether each letter of the type code is uppercase (bit 5 is 0) or lowercase (bit 5 is 1). However, decoders should test the properties of an unknown chunk by numerically testing the specified bits; testing whether a character is uppercase or lowercase is inefficient, and even incorrect if a locale-specific case definition is used.
It is also worth noting that the property bits are an inherent part of the chunk name, and hence are fixed for any chunk type. Thus, TEXT and Text are completely unrelated chunk type codes. Decoders should recognize codes by simple four-byte literal comparison; it is incorrect to perform case conversion on type codes.
The semantics of the property bits are:
Chunks that are necessary for successful display of the file's contents are called "critical" chunks. Decoders encountering an unknown chunk in which the ancillary bit is 0 must indicate to the user that the image contains information they cannot safely interpret. The image header chunk (IHDR) is an example of a critical chunk.
If a chunk's safe-to-copy bit is 1, the chunk may be copied to a modified PNG file whether or not the software recognizes the chunk type, and regardless of the extent of the file modifications.
If a chunk's safe-to-copy bit is 0, it indicates that the chunk depends on the image data. If the program has made any changes to critical chunks, including addition, modification, deletion, or reordering of critical chunks, then unrecognized unsafe chunks must not be copied to the output PNG file. (Of course, if the program does recognize the chunk, it may choose to output an appropriately modified version.)
A PNG file modifier is always allowed to copy all unrecognized chunks if it has only added, deleted, or modified ancillary chunks. This implies that it is not permissible to make ancillary chunks that depend on other ancillary chunks.
PNG file modifiers which do not recognize a critical chunk should report an error and refuse to process that PNG file at all. The safe/unsafe mechanism is intended for use with ancillary chunks. The safe-to-copy bit will always be 0 for critical chunks.
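Numeric testing of the two property bits discussed above can be sketched as follows; the ancillary bit lives in the first byte of the type code and the safe-to-copy bit in the fourth:

```python
def is_ancillary(type_code):
    """Bit 5 (value 32) of the first byte: 1 = ancillary, 0 = critical."""
    return bool(type_code[0] & 32)

def is_safe_to_copy(type_code):
    """Bit 5 (value 32) of the fourth byte: 1 = safe to copy."""
    return bool(type_code[3] & 32)

print(is_ancillary(b"IHDR"), is_safe_to_copy(b"IHDR"))   # -> False False
print(is_ancillary(b"tEXt"), is_safe_to_copy(b"tEXt"))   # -> True True
```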
See Rationale: Chunk naming conventions.
   x^32+x^26+x^23+x^22+x^16+x^12+x^11+x^10+x^8+x^7+x^5+x^4+x^2+x+1

The 32-bit CRC is initialized to all 1's, and then the data from each byte is processed from the least significant bit (1) to the most significant bit (128). The 1's complement of the CRC is transmitted (stored in the file) MSB first. For the purpose of separating into bytes and ordering, the least significant bit of the 32-bit CRC is defined to be the coefficient of the x^31 term.
Practical calculation of the CRC always employs a precalculated table to greatly accelerate the computation. See Appendix: Sample CRC Code.
   Width:              4 bytes
   Height:             4 bytes
   Bit depth:          1 byte
   Color type:         1 byte
   Compression type:   1 byte
   Filter type:        1 byte
   Interlace type:     1 byte

Width and height give the image dimensions in pixels. They are 4-byte integers. Zero is an invalid value. The maximum for each is (2^31)-1 in order to accommodate languages which have difficulty with unsigned 4-byte values.
Bit depth is a single-byte integer giving the number of bits per pixel (for palette images) or per sample (for grayscale and truecolor images). Valid values are 1, 2, 4, 8, and 16, although not all values are allowed for all color types.
Color type is a single-byte integer that describes the interpretation of the image data. Color type values represent sums of the following values: 1 (palette used), 2 (color used), and 4 (full alpha used). Valid values are 0, 2, 3, 4, and 6.
Bit depth restrictions for each color type are imposed both to simplify implementations and to prohibit certain combinations that do not compress well in practice. Decoders must support all legal combinations of bit depth and color type. (Note that bit depths of 16 are easily supported on 8-bit display hardware by dropping the least significant byte.)
   Color    Allowed       Interpretation
   Type     Bit Depths

   0        1,2,4,8,16    Each pixel value is a grayscale level.

   2        8,16          Each pixel value is an R,G,B series.

   3        1,2,4,8       Each pixel value is a palette index;
                          a PLTE chunk must appear.

   4        8,16          Each pixel value is a grayscale level,
                          followed by an alpha channel level.

   6        8,16          Each pixel value is an R,G,B series,
                          followed by an alpha channel level.

Note that an alpha channel, where present, is represented by either one byte per pixel (when bit depth=8) or two bytes per pixel (when bit depth=16).
Compression type is a single-byte integer that indicates the method used to compress the image data. Compression methods are defined in a later chapter. At present, only compression type 0 (deflate/inflate compression with a 32K sliding window) is defined. All standard PNG images must be compressed with this scheme. The compression type code is provided for possible future expansion or proprietary variants. Decoders must check this byte and report an error if it holds an unrecognized code.
Filter type is a single-byte integer that indicates the preprocessing method applied to the image data before compression. Filtering methods are defined in a later chapter. At present, only filter type 0 (adaptive filtering with five basic filter types) is defined. As with the compression type code, decoders must check this byte and report an error if it holds an unrecognized code. See Filter algorithms for details.
Interlace type is a single-byte integer that indicates the transmission order of the pixel data. Two values are currently defined: 0 (no interlace) or 1 (Adam7 interlace). See Interlaced data order for details.
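Since multi-byte integers in PNG are stored most significant byte first, the 13 data bytes of IHDR can be unpacked as follows (a sketch; the helper name and dictionary keys are illustrative):

```python
import struct

def parse_ihdr(data):
    """Unpack the 13-byte IHDR data field (big-endian integers)."""
    w, h, depth, color, comp, filt, interlace = struct.unpack(">IIBBBBB", data)
    if not (0 < w < 2**31 and 0 < h < 2**31):
        raise ValueError("invalid image dimensions")
    return {"width": w, "height": h, "bit_depth": depth,
            "color_type": color, "compression_type": comp,
            "filter_type": filt, "interlace_type": interlace}

hdr = parse_ihdr(struct.pack(">IIBBBBB", 640, 480, 8, 2, 0, 0, 0))
print(hdr["width"], hdr["color_type"])   # -> 640 2
```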
   red:   1 byte (0 = black, 255 = red)
   green: 1 byte (0 = black, 255 = green)
   blue:  1 byte (0 = black, 255 = blue)

The number of entries is determined from the chunk length. A chunk length not divisible by 3 is an error.
This chunk must appear for color type 3, and may appear for color types 2 and 6. If this chunk does appear, it must precede the first IDAT chunk.
For color type 3 (palette data), the PLTE chunk is required. The first entry in PLTE is referenced by pixel value 0, the second by pixel value 1, etc. The number of palette entries must not exceed the range that can be represented by the bit depth (for example, 2^4 = 16 for a bit depth of 4). It is permissible to have fewer entries than the bit depth would allow. In that case, any out-of-range pixel value found in the image data is an error.
For color types 2 and 6 (truecolor), the PLTE chunk is optional. If present, it provides a recommended set of from 1 to 256 colors to which the truecolor image may be quantized if the viewer cannot display truecolor directly. If PLTE is not present, such a viewer must select colors on its own, but it is often preferable for this to be done once by the encoder.
Note that the palette uses 8 bits (1 byte) per value regardless of the image bit depth specification. In particular, the palette is 8 bits deep even when it is a suggested quantization of a 16-bit truecolor image.
There may be multiple IDAT chunks; if so, they must appear consecutively with no other intervening chunks. The compressed data stream is the concatenation of the contents of all the IDAT chunks. The encoder may divide the compressed data stream into chunks as it wishes; chunk boundaries have no semantic significance. (Multiple IDAT chunks are allowed so that encoders can work in a fixed amount of memory; typically the chunk size will correspond to the encoder's buffer size.)
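A decoder can therefore simply concatenate the IDAT contents before inflating. A sketch using Python's zlib module, on the assumption that its stream format matches the Ziplib format referenced later:

```python
import zlib

def decompress_idat(chunk_data_fields):
    """Join the data fields of consecutive IDAT chunks and inflate;
    the chunk boundaries themselves carry no meaning."""
    return zlib.decompress(b"".join(chunk_data_fields))

stream = zlib.compress(bytes([0, 1, 2, 3]))   # stand-in compressed stream
pieces = [stream[:3], stream[3:]]             # arbitrarily split across "chunks"
print(decompress_idat(pieces) == bytes([0, 1, 2, 3]))   # -> True
```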
   Image gamma value: 4 bytes

The gamma correction chunk specifies the gamma of the camera (or simulated camera) that produced the image, and thus the gamma of the image with respect to the original scene. Note that this is not the same as the gamma of the display device that will reproduce the image correctly.
A value of 100000 represents a gamma of 1.0, a value of 45000 a gamma of 0.45, and so on (divide by 100000.0). Values around 1.0 and around 0.45 are common in practice.
If the encoder does not know the gamma value, it should not write a gamma chunk; the absence of a gamma chunk indicates the gamma is unknown.
If the gAMA chunk appears, it must precede the first IDAT chunk, and it must also precede the PLTE chunk if present.
See Gamma correction.
For color type 0 (grayscale), the sBIT chunk contains a single byte, indicating the number of bits which were significant in the source data.
For color type 2 (RGB truecolor), the sBIT chunk contains three bytes, indicating the number of bits which were significant in the source data for the red, green, and blue channels, respectively.
For color type 3 (palette color), the sBIT chunk contains three bytes, indicating the number of bits which were significant in the source data for the red, green, and blue components of the palette entries, respectively.
For color type 4 (grayscale with alpha channel), the sBIT chunk contains two bytes, indicating the number of bits which were significant in the source grayscale data and the source alpha channel data, respectively.
For color type 6 (RGB truecolor with alpha channel), the sBIT chunk contains four bytes, indicating the number of bits which were significant in the source data for the red, green, blue and alpha channels, respectively.
If the sBIT chunk appears, it must precede the first IDAT chunk, and it must also precede the PLTE chunk if present.
The chunk stores eight values, each encoded as a 4-byte integer, representing the X or Y value times 100000. They are stored in the order White Point X, White Point Y, Red X, Red Y, Green X, Green Y, Blue X, Blue Y.
If the cHRM chunk appears, it must precede the first IDAT chunk, and it must also precede the PLTE chunk if present.
For color type 3 (palette), this chunk's contents are a series of alpha channel bytes, corresponding to palette indexes in the PLTE chunk. Each entry indicates that pixels of that palette index should be treated as having the specified alpha value. Alpha values have the same interpretation as in an 8-bit full alpha channel: 0 is fully transparent, 255 is fully opaque, regardless of image bit depth. The tRNS chunk may contain fewer alpha channel bytes than there are palette entries. In this case, the alpha channel value for all remaining palette entries is assumed to be 255. In the common case where only palette index 0 need be made transparent, only a one-byte tRNS chunk is needed. The tRNS chunk may not contain more bytes than there are palette entries.
For color type 0 (grayscale), the tRNS chunk contains a single gray level value, stored in the format
   gray: 2 bytes, range 0 .. (2^bitdepth) - 1

(For consistency, 2 bytes are used regardless of the image bit depth.) Pixels of the specified gray level are to be treated as transparent (equivalent to alpha value 0); all other pixels are to be treated as fully opaque (alpha value (2^bitdepth)-1).
For color type 2 (RGB), the tRNS chunk contains a single RGB color value, stored in the format
   red:   2 bytes, range 0 .. (2^bitdepth) - 1
   green: 2 bytes, range 0 .. (2^bitdepth) - 1
   blue:  2 bytes, range 0 .. (2^bitdepth) - 1

(For consistency, 2 bytes per sample are used regardless of the image bit depth.) Pixels of the specified color value are to be treated as transparent (equivalent to alpha value 0); all other pixels are to be treated as fully opaque (alpha value (2^bitdepth)-1).
tRNS is prohibited for color types 4 and 6, since a full alpha channel is already present in those cases.
When present, the tRNS chunk must precede the first IDAT chunk, and must follow the PLTE chunk, if any.
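For the palette case, building the per-entry alpha table can be sketched as (helper name illustrative):

```python
def palette_alpha(trns_data, palette_size):
    """Alpha value for each palette entry: bytes from the tRNS chunk,
    then 255 (fully opaque) for any remaining entries."""
    if len(trns_data) > palette_size:
        raise ValueError("tRNS chunk longer than the palette")
    return list(trns_data) + [255] * (palette_size - len(trns_data))

# One-byte tRNS: only palette index 0 is transparent.
print(palette_alpha(b"\x00", 4))   # -> [0, 255, 255, 255]
```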
For color type 3 (palette), the bKGD chunk contains:
   palette index: 1 byte

The value is the palette index of the color to be used as background.
For color types 0 and 4 (grayscale, with or without alpha), bKGD contains:
   gray: 2 bytes, range 0 .. (2^bitdepth) - 1

(For consistency, 2 bytes are used regardless of the image bit depth.) The value is the gray level to be used as background.
For color types 2 and 6 (RGB, with or without alpha), bKGD contains:
   red:   2 bytes, range 0 .. (2^bitdepth) - 1
   green: 2 bytes, range 0 .. (2^bitdepth) - 1
   blue:  2 bytes, range 0 .. (2^bitdepth) - 1

(For consistency, 2 bytes per sample are used regardless of the image bit depth.) This is the RGB color to be used as background.
When present, the bKGD chunk must precede the first IDAT chunk, and must follow the PLTE chunk, if any.
See Recommendations for decoders: Background color.
This chunk's contents are a series of 2-byte (16 bit) unsigned integers. There must be exactly one entry for each entry in the PLTE chunk. Each entry is proportional to the fraction of pixels in the image that have that palette index; the exact scale factor is chosen by the encoder.
Histogram entries are approximate, with the exception that a zero entry specifies that the corresponding palette entry is not used at all in the image. It is required that a histogram entry be nonzero if there are any pixels of that color.
When the palette is a suggested quantization of a truecolor image, the histogram is necessarily approximate, since a decoder may map pixels to palette entries differently than the encoder did. In this situation, zero entries should not appear.
The hIST chunk, if it appears, must follow the PLTE chunk, and must precede the first IDAT chunk.
See Rationale: Palette histograms, and Recommendations for decoders: Palette histogram usage.
Any number of tEXt chunks may appear, and more than one with the same keyword is permissible.
The keyword indicates the type of information represented by the string. The following keywords are predefined and should be used where appropriate:
   Title        Short (one line) title or caption for image
   Author       Name of image's creator
   Copyright    Copyright notice
   Description  Description of image (possibly long)
   Software     Software used to create the image
   Disclaimer   Legal disclaimer
   Warning      Warning of nature of content
   Source       Device used to create the image
   Comment      Miscellaneous comment; conversion from GIF comment

Other keywords, containing any sequence of printable characters in the character set, may be invented for other purposes. Keywords of general interest may be registered with the maintainers of the PNG specification.
Keywords must be spelled exactly as registered, so that decoders may use simple literal comparisons when looking for particular keywords. In particular, keywords are considered case-sensitive.
See Recommendations for encoders: Text chunk processing and Recommendations for decoders: Text chunk processing.
A zTXt chunk begins with an uncompressed Latin-1 keyword followed by a null (0) character, just as in the tEXt chunk. The next byte after the null contains a compression type byte, for which the only legitimate value is presently zero (deflate/inflate compression). The compression-type byte is followed by a compressed Latin-1 text data stream which makes up the remainder of the chunk. The compressed data stream uses the same Ziplib format specified below under Deflate/inflate compression specification.
Any number of zTXt and tEXt chunks may appear in the same file. See the preceding definition of the tEXt chunk for the predefined keywords and the exact format of the text.
See Recommendations for encoders: Text chunk processing and Recommendations for decoders: Text chunk processing.
   4 bytes: pixels per unit, X axis (unsigned integer)
   4 bytes: pixels per unit, Y axis (unsigned integer)
   1 byte:  unit specifier

The following values are legal for the unit specifier:
   0: unit is unknown (pHYs defines pixel aspect ratio only)
   1: unit is the meter

Conversion note: one inch is equal to exactly 0.0254 meters.
If this ancillary chunk is not present, pixels are assumed to be square, and the physical size of each pixel is unknown.
If present, this chunk must precede the first IDAT chunk.
See Recommendations for decoders: Pixel dimensions.
   4 bytes: image position on the page, X axis (signed integer)
   4 bytes: image position on the page, Y axis (signed integer)
   1 byte:  unit specifier

Both position values are signed. The following values are legal for the unit specifier:
   0: unit is the pixel (true dimensions unknown)
   1: unit is the micrometer (also known as the micron; 1/1,000,000th of a meter)

Conversion note: one inch is equal to exactly 25,400 micrometers.
This chunk gives the position on a printed page at which the image should be output when printed alone. The X position is measured rightwards from the left side of the page to the left side of the image; the Y position is measured downwards from the top side of the page to the top of the image.
If present, this chunk must precede the first IDAT chunk.
   2 bytes: Year (complete; for example, 1995)
   1 byte:  Month (1-12)
   1 byte:  Day (1-31)
   1 byte:  Hour (0-23)
   1 byte:  Minute (0-59)
   1 byte:  Second (0-60) (yes, 60, for leap seconds; not 61, a common error)

This chunk gives the time of the last image modification. Universal Time (UTC, also called GMT) should be specified rather than local time.
   Name   Multiple  Ordering constraints
          OK?

   IHDR   No        Must be first
   PLTE   No        Before IDAT
   IDAT   Yes       Multiple IDATs must be consecutive
   IEND   No        Must be last
   gAMA   No        Before PLTE, IDAT
   sBIT   No        Before PLTE, IDAT
   cHRM   No        Before PLTE, IDAT
   tRNS   No        After PLTE; before IDAT
   bKGD   No        After PLTE; before IDAT
   hIST   No        After PLTE; before IDAT
   tEXt   Yes       None
   zTXt   Yes       None
   pHYs   No        Before IDAT
   oFFs   No        Before IDAT
   tIME   No        None
Standard keywords for tEXt and zTXt chunks:
   Title        Short (one line) title or caption for image
   Author       Name of image's creator
   Copyright    Copyright notice
   Description  Description of image (possibly long)
   Software     Software used to create the image
   Disclaimer   Legal disclaimer
   Warning      Warning of nature of content
   Source       Device used to create the image
   Comment      Miscellaneous comment; conversion from GIF comment
A formal, detailed specification of inflate and deflate is being written at this time, and will be included in this document when finalized. The current draft of the inflate/deflate specification is available by anonymous FTP from quest.jpl.nasa.gov in the directory /beta/ziplib/.

The compressed data stream will be stored in the Ziplib format, the specification of which is also being written at this time. The current draft of the Ziplib specification is also available by anonymous FTP from quest.jpl.nasa.gov in the directory /beta/ziplib/.
PNG defines five basic filtering algorithms, which are given numeric codes as follows:
   Code  Name

   0     None
   1     Sub
   2     Up
   3     Average
   4     Paeth

The encoder may choose which algorithm to apply on a scanline-by-scanline basis. In the image data sent to the compression step, each scanline is preceded by a filter type byte containing the numeric code of the filter algorithm used for that scanline.
Filtering algorithms are applied to bytes, not to pixels, regardless of the bit depth or color type of the image. The filtering algorithms work on the byte sequence formed by a scanline that has been represented as described previously (Image layout).
When the image is interlaced, each pass of the interlace pattern is treated as an independent image for filtering purposes. The filters work on the byte sequences formed by the pixels actually transmitted during a pass, and the "previous scanline" is the one previously transmitted in the same pass, not the one adjacent in the complete image. Note that the subimage transmitted in any one pass is always rectangular, but is of smaller width and/or height than the complete image.
For all filters, the bytes "to the left of" the first pixel in a scanline must be treated as being zero. For filters that refer to the prior scanline, the entire prior scanline must be treated as being zeroes for the first scanline of an image (or of a pass of an interlaced image).
To reverse the effect of a filter, the decoder must use the decoded values of the prior pixel on the same line, the pixel immediately above the current pixel on the prior line, and the pixel just to the left of the pixel above. This implies that at least one scanline's worth of image data must be stored by the decoder at all times. Note that although some filter types do not refer to the prior scanline, the decoder must always store each scanline as it is decoded, since the next scanline might use a filter that refers to it.
PNG imposes no restriction on which filter types may be applied to an image. However, the filters are not equally effective on all types of data. See Recommendations for encoders: Filter selection.
See also Rationale: Filtering.
Apply the following formula to each byte of each scanline, where x ranges from zero to the number of bytes representing that scanline minus one (1), and Raw(x) refers to the raw data byte at that byte position in the scanline:
Sub(x) = Raw(x) - Raw(x-bpp)
Note this is done for each byte, regardless of bit depth. Unsigned arithmetic modulo 256 is used, so that both the inputs and outputs fit into bytes. The sequence of Sub values is transmitted as the filtered scanline.
bpp is defined as the number of bytes per complete pixel, rounding up to one (1). For example, for color type 2 with a bit depth of 16, bpp is equal to 6 (three channels, two bytes per channel); for color type 0 with a bit depth of 2, bpp is equal to 1 (rounding up); for color type 4 with a bit depth of 16, bpp is equal to 4 (two-byte grayscale value, plus two-byte alpha channel).
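The computation of bpp can be sketched as follows (channel counts follow the color type table in the IHDR description):

```python
def filter_bpp(color_type, bit_depth):
    """Bytes per complete pixel for filtering, rounded up to one.
    Channels per pixel: gray/palette 1, gray+alpha 2, RGB 3, RGBA 4."""
    channels = {0: 1, 2: 3, 3: 1, 4: 2, 6: 4}[color_type]
    return max(1, channels * bit_depth // 8)

print(filter_bpp(2, 16))   # -> 6 (three channels, two bytes each)
print(filter_bpp(0, 2))    # -> 1 (rounded up)
print(filter_bpp(4, 16))   # -> 4 (two-byte gray plus two-byte alpha)
```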
Important: for all x < 0, assume Raw(x) = 0.
To reverse the effect of the Sub filter after decompression, output the following value:
Sub(x) + Raw(x-bpp)
(computed mod 256), where Raw refers to the bytes already decoded.
Apply the following formula to each byte of each scanline, where x ranges from zero to the number of bytes representing that scanline minus one (1), and Raw(x) refers to the raw data byte at that byte position in the scanline:
Up(x) = Raw(x) - Prior(x)
where Prior refers to the unfiltered bytes of the prior scanline.
Note this is done for each byte, regardless of bit depth. Unsigned arithmetic modulo 256 is used, so that both the inputs and outputs fit into bytes. The sequence of Up values is transmitted as the filtered scanline.
Important: on the first scanline of an image (or of a pass of an interlaced image), assume Prior(x) = 0 for all x.
To reverse the effect of the Up filter after decompression, output the following value:
Up(x) + Prior(x)
(computed mod 256), where Prior refers to the decoded bytes of the prior scanline.
Apply the following formula to each byte of each scanline, where x ranges from zero to the number of bytes representing that scanline minus one (1), and Raw(x) refers to the raw data byte at that byte position in the scanline:
Average(x) = Raw(x) - floor((Raw(x-bpp)+Prior(x))/2)
where Prior refers to the unfiltered bytes of the prior scanline, and bpp is defined as for the Sub filter.
Note this is done for each byte, regardless of bit depth. The sequence of Average values is transmitted as the filtered scanline.
The subtraction of the predicted value from the raw byte must be done modulo 256, so that both the inputs and outputs fit into bytes. However, the sum Raw(x-bpp)+Prior(x) must be formed without overflow (using at least nine-bit arithmetic). floor() indicates that the result of the division is rounded to the next lower integer if fractional; in other words, it is an integer division or right shift operation.
Important: for all x < 0, assume Raw(x) = 0. On the first scanline of an image (or of a pass of an interlaced image), assume Prior(x) = 0 for all x.
To reverse the effect of the Average filter after decompression, output the following value:
Average(x) + floor((Raw(x-bpp)+Prior(x))/2)
where the result is computed mod 256, but the prediction is calculated in the same way as for encoding. Raw refers to the bytes already decoded, and Prior refers to the decoded bytes of the prior scanline.
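The overflow requirement is the subtle point of the Average filter: the sum Raw(x-bpp)+Prior(x) can reach 510 and must not be reduced mod 256 before halving. A C sketch (illustrative helper names; the int intermediates provide the at-least-nine-bit arithmetic, and integer division supplies floor()):

```c
#include <stddef.h>

/* Encode: Average(x) = Raw(x) - floor((Raw(x-bpp)+Prior(x))/2), mod 256. */
void average_encode(const unsigned char *raw, const unsigned char *prior,
                    unsigned char *out, size_t len, size_t bpp)
{
    for (size_t x = 0; x < len; x++) {
        int left = (x >= bpp) ? raw[x - bpp] : 0; /* Raw(x-bpp), 0 for x < bpp */
        int up   = prior ? prior[x] : 0;          /* Prior(x), 0 on first line */
        /* (left + up) is formed in int, so the 9-bit sum cannot overflow;
           only the final subtraction is taken mod 256. */
        out[x] = (unsigned char)(raw[x] - (left + up) / 2);
    }
}

/* Decode: Raw(x) = Average(x) + floor((Raw(x-bpp)+Prior(x))/2), mod 256. */
void average_decode(const unsigned char *avg, const unsigned char *prior,
                    unsigned char *raw, size_t len, size_t bpp)
{
    for (size_t x = 0; x < len; x++) {
        int left = (x >= bpp) ? raw[x - bpp] : 0;
        int up   = prior ? prior[x] : 0;
        raw[x] = (unsigned char)(avg[x] + (left + up) / 2);
    }
}
```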
Apply the following formula to each byte of each scanline, where x ranges from zero to the number of bytes representing that scanline minus one (1), and Raw(x) refers to the raw data byte at that byte position in the scanline:
Paeth(x) = Raw(x) - PaethPredictor(Raw(x-bpp),Prior(x),Prior(x-bpp))
where Prior refers to the unfiltered bytes of the prior scanline, and bpp is defined as for the Sub filter.
Note this is done for each byte, regardless of bit depth. Unsigned arithmetic modulo 256 is used, so that both the inputs and outputs fit into bytes. The sequence of Paeth values is transmitted as the filtered scanline.
The PaethPredictor function is defined by the following pseudocode:
function PaethPredictor (a, b, c)
begin
     ; a = left, b = above, c = upper left
     p := a + b - c        ; initial estimate
     pa := abs(p - a)      ; distances to a, b, c
     pb := abs(p - b)
     pc := abs(p - c)
     ; return nearest of a,b,c,
     ; breaking ties in order a,b,c.
     if pa <= pb AND pa <= pc begin
          return a
     end
     if pb <= pc begin
          return b
     end
     return c
end

The calculations within the PaethPredictor function must be performed exactly, without overflow. Arithmetic modulo 256 is to be used only for the final step of subtracting the function result from the target pixel value.
Note that the order in which ties are broken is fixed and must not be altered. The tie break order is: pixel to the left, pixel above, pixel to the upper left. (This order differs from that given in Paeth's article.)
Important: for all x < 0, assume Raw(x) = 0 and Prior(x) = 0. On the first scanline of an image (or of a pass of an interlaced image), assume Prior(x) = 0 for all x.
To reverse the effect of the Paeth filter after decompression, output the following value:
Paeth(x) + PaethPredictor(Raw(x-bpp),Prior(x),Prior(x-bpp))
(computed mod 256), where Raw and Prior refer to bytes already decoded. Exactly the same PaethPredictor function is used by both encoder and decoder.
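A direct C transcription of the predictor pseudocode may be useful (the function name is illustrative; int arithmetic keeps the internal calculations exact, as required, with mod 256 applied only when the result is later subtracted from or added to the target byte):

```c
#include <stdlib.h>  /* abs */

/* a = left, b = above, c = upper left */
int paeth_predictor(int a, int b, int c)
{
    int p  = a + b - c;    /* initial estimate, computed exactly */
    int pa = abs(p - a);   /* distances to a, b, c */
    int pb = abs(p - b);
    int pc = abs(p - c);
    /* Return nearest of a, b, c, breaking ties in the order a, b, c. */
    if (pa <= pb && pa <= pc)
        return a;
    if (pb <= pc)
        return b;
    return c;
}
```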
New proprietary chunks will only be registered if they are of use to others and do not violate the design philosophy of PNG. Chunk registration is not automatic, although it is the intent of the authors that it be straightforward when a new chunk of potentially wide application in one or several fields is needed.
If you do not require or desire that others outside your organization understand the chunk type, you may use a private chunk name by specifying a lowercase letter for the second character.
Please note that if you want to use a private chunk for information that is not essential to view the image, and you want the image to remain viewable by software other than your own, you should use an ancillary chunk type (first character lowercase) rather than a critical chunk type (first character uppercase).
Also note that others may use the same private chunk name, so it is advantageous to keep additional identifying information at the beginning of the chunk data.
If an ancillary chunk is to contain textual information that might be of interest to a human user, it is recommended that a special chunk type not be used. Instead use a tEXt chunk and define a suitable keyword. In this way, the information will be available to users not using your software.
New keywords for tEXt chunks may be registered with the maintainers of the PNG specification. Keywords should be reasonably self-explanatory.
For example, if 5 bits per channel are available in the source data, conversion to a bitdepth of 8 can be achieved as follows.
If the value for a sample in the source data is 27 (in a range from 0-31), then the original bits are:
4 3 2 1 0
---------
1 1 0 1 1

Converted to a bitdepth of 8, the best value is 222:
7 6 5 4 3 2 1 0
----------------
1 1 0 1 1 1 1 0
|=======|  |===|
    |        Leftmost Bits
    |        Repeated to Fill
    |        Open Bits
    |
Original Bits

Note that this scaling can be reversed simply by shifting right.
Scaling by simply shifting left by three bits is incorrect, since the resulting data would have a range less than the desired full range (continuing the example, 248 = 11111000 is not full brightness).
It is recommended that the sBIT chunk be included when bitdepth scaling has been performed, so that decoders will know the original data depth.
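The bit-replication technique above can be written in C as follows. The function names and the fixed 5-to-8 case are illustrative; the general rule is to shift the sample into the high-order bits and fill the vacated low-order bits with the leftmost source bits:

```c
/* Scale a 5-bit sample (0-31) to 8 bits by repeating the leftmost
   source bits in the vacated low-order bits. */
unsigned char scale_5_to_8(unsigned char v5)
{
    return (unsigned char)((v5 << 3) | (v5 >> 2)); /* top 3 bits fill the gap */
}

/* The scaling is reversed simply by shifting right. */
unsigned char unscale_8_to_5(unsigned char v8)
{
    return (unsigned char)(v8 >> 3);
}
```

Note that, unlike a plain left shift, this maps the maximum input (31) to the maximum output (255), preserving the full brightness range.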
A linear brightness level, expressed as a floating-point value in the range 0 to 1, may be converted to a gamma-corrected pixel value by
gbright := bright ^ gamma
pixelval := ROUND(gbright * MAXPIXVAL)

Computer graphics renderers often do not perform gamma encoding, instead making pixel values directly proportional to scene brightness. This "linear" pixel encoding is equivalent to gamma encoding with a gamma of 1.0, so graphics programs that produce linear pixels should always put out a gAMA chunk specifying a gamma of 1.0.
It is not recommended that encoders attempt to convert supplied images to a different gamma. Store the data in the file without conversion, and record the source gamma. Gamma conversion at encode time is a bad idea because gamma adjustment of digital pixel data is inherently lossy, due to roundoff error (8 or so bits is not really enough accuracy). Thus encode-time conversion permanently degrades the image. Worse, if the eventual decoder wants the data with some other gamma, then two conversions occur, each introducing roundoff error. Better to store the data losslessly and incur at most one conversion when the image is finally displayed.
Gamma does not apply to alpha channel values; alpha is always represented linearly.
For applications that do not require a full alpha channel, or cannot afford the price in compression efficiency, the tRNS transparency chunk is also available.
If the image has a known background color, this color should be written in the bKGD chunk. Even viewers that ignore transparency may use the bKGD color to fill unused screen area.
Filter type 0 is also recommended for images of bit depths less than 8. For low-bit-depth grayscale images, it may occasionally be a net win to expand the image to 8-bit representation and apply filtering, but such cases are rare.
For truecolor and grayscale images, any of the five filters may prove the most effective. If an encoder wishes to use a fixed filter choice, the Paeth filter is most likely to be the best.
For best compression of truecolor and grayscale images, we recommend an adaptive filtering approach in which a filter is chosen for each scanline. The following simple heuristic has performed well in early tests: compute the output scanline using all five filters, and select the filter which gives the smallest sum of absolute values of outputs. (Consider the output bytes as signed differences for this test.) This method usually outperforms any single fixed filter choice. However, it is likely that much better heuristics will be found as more experience is gained with PNG.
Filtering according to these recommendations is effective on interlaced as well as noninterlaced images.
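The minimum-sum-of-absolute-values heuristic described above can be sketched in C as follows. The function name and calling convention are illustrative (filtered[f] is assumed to hold the scanline output of filter type f); the essential points are that each output byte is treated as a signed difference and that the filter with the smallest sum wins:

```c
#include <stddef.h>

/* Return the filter type (0-4) whose output scanline has the smallest
   sum of absolute values, treating output bytes as signed differences. */
int choose_filter(unsigned char *filtered[5], size_t len)
{
    int best = 0;
    unsigned long best_sum = (unsigned long)-1;
    for (int f = 0; f < 5; f++) {
        unsigned long sum = 0;
        for (size_t x = 0; x < len; x++) {
            signed char d = (signed char)filtered[f][x]; /* signed difference */
            sum += (unsigned long)(d < 0 ? -d : d);
        }
        if (sum < best_sum) {
            best_sum = sum;
            best = f;
        }
    }
    return best;
}
```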
Encoders should discourage the creation of single lines of text longer than 79 characters, in order to facilitate easy reading.
It is strongly recommended that decoders verify the CRC on each chunk.
For known-length chunks such as IHDR, decoders should treat an unexpected chunk length as an error. Future extensions to this specification will not add new fields to existing chunks; instead, new chunk types will be added to carry any new information.
Unexpected values in fields of known chunks (for example, an unexpected compression type in the IHDR chunk) should be checked for and treated as errors.
Conversely, viewers running on display hardware with non-square pixels are strongly encouraged to rescale images for proper display.
A simple, fast way of doing this is to reduce the image to a fixed palette. Palettes with uniform color spacing ("color cubes") are usually used to minimize the per-pixel computation. For photograph-like images, dithering is recommended to avoid ugly contours in what should be smooth gradients; however, dithering introduces graininess which may be objectionable.
The quality of rendering can be improved substantially by using a palette chosen specifically for the image, since a color cube usually has numerous entries that are unused in any particular image. This approach requires more work, first in choosing the palette, and second in mapping individual pixels to the closest available color. PNG allows the encoder to supply a suggested palette in a PLTE chunk, but not all encoders will do so, and the suggested palette may be unsuitable in any case (it may have too many or too few colors). High-quality viewers will therefore need to have a palette selection routine at hand. A large lookup table is usually the most feasible way of mapping individual pixels to palette entries with adequate speed.
Numerous implementations of color quantization are available. The PNG reference implementation will include code for the purpose.
gbright := pixelval / MAXPIXVAL
bright := gbright ^ (1.0 / file_gamma)
gcvideo := bright ^ (1.0 / display_gamma)
fbval := ROUND(gcvideo * MAXFBVAL)

where MAXPIXVAL is the maximum pixel value in the file (255 for 8-bit, 65535 for 16-bit, etc), MAXFBVAL is the maximum value of a frame buffer pixel (255 for 8-bit, 31 for 5-bit, etc), pixelval is the value of the pixel in the PNG file, and fbval is the value to write into the frame buffer. The first line converts from pixel code into a normalized 0 to 1 floating point value, the second undoes the encoding of the image file to produce a linear brightness value, the third line pre-corrects for the monitor's gamma response, and the fourth converts to an integer frame buffer pixel. In practice, the second and third lines are merged into
gcvideo := gbright ^ (1.0 / (file_gamma * display_gamma))

so as to perform only one power calculation.
(Note that this assumes that you want the final image to have a gamma of 1.0 relative to the original scene. Sometimes it looks better to make the overall gamma a bit higher, perhaps 1.25. To get this, replace the first "1.0" in the formula above with "desired_system_gamma".)
It is not necessary to perform transcendental math for every pixel! Instead, compute a lookup table that gives the correct output value for every pixel value. This requires only 256 calculations per image (for 8-bit accuracy), not one calculation per pixel. For palette-based images, a one-time correction of the palette is sufficient.
In some cases even computing a gamma lookup table may be a concern. In these cases, viewers are encouraged to have precomputed gamma correction tables for file_gamma values of 1.0 and 0.45 and some reasonable single display_gamma value, and to use the table closest to the gamma indicated in the file. This will produce acceptable results for the majority of real files.
In practice, it is often difficult to determine the gamma of the actual display. It is common to assume a display gamma of 2.2 (or 1.0, on hardware for which that value is typical) and to allow the user to modify this value at their option.
Similarly, when the incoming image has unknown gamma (no gAMA chunk), choose a likely default value, but allow the user to select a new one if the result proves too dark or too light.
Finally, note that the response of real displays is actually more complex than can be described by a single number (display_gamma). If actual measurements of the monitor's light output as a function of voltage input are available, the third and fourth lines of the computation above may be replaced by a lookup in these measurements, to find the actual frame buffer value that most nearly gives the desired brightness.
Viewers which cannot blend colors smoothly with the background should interpret all nonzero alpha values as fully opaque (no background). This case is reasonably simple to implement: transparent pixels are replaced by the background color, others are unchanged.
If a viewer has no particular background against which to present an image, it may ignore the alpha channel or tRNS chunk. (But alpha channel values must still be properly skipped over when reading the image data.)
However, if the background color has been set with the bKGD chunk, the alpha channel can be meaningfully interpreted with respect to it even in a standalone image viewer.
If no histogram chunk is provided, a decoder can of course develop its own, at the cost of an extra pass over the image data.
Decoders should be prepared to display text chunks which contain any number of printing characters between newline characters, even though encoders are encouraged to avoid creating lines in excess of 79 characters.
This appendix gives the reasoning behind some of the design decisions in PNG. Many of these decisions were the subject of considerable debate. The authors freely admit that another group might have made different decisions; however, we believe that our choices are defensible and consistent.
We have also addressed some of the widely known shortcomings of GIF. In particular, PNG supports truecolor images. We know of no widely used image format that losslessly compresses truecolor images as effectively as PNG does. We hope that PNG will make use of truecolor images more practical and widespread.
Some form of transparency control is desirable for applications in which images are displayed against a background or together with other images. GIF provided a simple transparent-color specification for this purpose. PNG supports a full alpha channel as well as transparent-color specifications. This allows both highly flexible transparency and compression efficiency.
Robustness against transmission errors has been an important consideration. For example, images transferred across the Internet are often mistakenly processed as text, leading to file corruption. PNG is designed so that such errors can be detected quickly and reliably.
PNG has been expressly designed not to be completely dependent on a single compression technique. Although inflate/deflate compression is mentioned in this document, PNG would still exist without it.
Basic PNG also does not support multiple images in one file. This restriction is a reflection of the reality that many applications do not need and will not support multiple images per file. (While the GIF standard nominally allows multiple images per file, few applications actually support it.) In any case, single images are a fundamentally different sort of object from sequences of images. Rather than make false promises of interchangeability, we have drawn a clear distinction between single-image and multi-image formats. PNG is a single-image format.
GIF is no longer suitable as a universal standard because of legal entanglements. Although just replacing GIF's compression method would avoid that problem, GIF does not support truecolor images, alpha channels, or gamma correction. The spec has more subtle problems too. Only a small subset of the GIF89 spec is actually portable across a variety of implementations, but there is no codification of the most portable part of the spec.
TIFF is far too complex to meet our goals of simplicity and interchangeability. Defining a TIFF subset would meet that objection, but would frustrate users making the reasonable assumption that a file saved as TIFF from Software XYZ would load into a program supporting our flavor of TIFF. Furthermore, TIFF is not designed for stream processing, has no provision for progressive display, and does not currently provide any good, legally unencumbered, lossless compression method.
IFF has also been suggested, but is not suitable in detail: available image representations are too machine-specific or not adequately compressed. The overall "chunk" structure of IFF is a useful concept which PNG has liberally borrowed from, but we did not attempt to be bit-for-bit compatible with IFF chunk structure. Again this is due to detailed issues, notably the fact that IFF FORMs are not designed to be serially writable.
Lossless JPEG is not suitable because it does not provide for the storage of palette-color images. Furthermore, its lossless truecolor compression is often inferior to that of PNG.
PNG expects viewers to compensate for image gamma at the time that the image is displayed. Another possible approach is to expect encoders to convert all images to a uniform gamma at encoding time. While that method would speed viewers slightly, it has fundamental flaws:
The filter algorithms are defined to operate on bytes, rather than pixels, for simplicity and speed. Tests have shown that filtering is ineffective for images with fewer than 8 bits per pixel. The filters will most often be used on 8-bit-precision data.
A final reason for not using pixel-based filtering is that if an 8-bit decoder plans to discard the low order byte of 16-bit data, it can do so before reversing the filter, boosting decoder speed.
The encoder is allowed to change filters for each new scanline. This creates no additional complexity for decoders, since a decoder is required to contain unfiltering logic for every filter type anyway. The only cost is an extra byte per scanline in the pre-compression data stream. Our tests showed that when the same filter is selected for all scanlines, this extra byte compresses away to almost nothing, so there is little storage cost compared to a fixed filter specified for the whole image. And the potential benefits of adaptive filtering are too great to ignore. Even with the simplistic filter-choice heuristics so far discovered, adaptive filtering usually outperforms fixed filters. In particular, an adaptive filter can change behavior for successive passes of an interlaced image; a fixed filter cannot.
The basic filters offered by PNG have been chosen on both theoretical and experimental grounds. In particular, it is worth noting that all the filters (except "none" and "average") operate by encoding the difference between a pixel and one of its neighboring pixels. This is usually superior to conventional linear prediction equations because the prediction is certain to be one of the possible pixel values. When the source data is not full depth (such as 5-bit data scaled up to 8-bit depth), this restriction ensures that the number of prediction delta values is no more than the number of distinct pixel values present in the source data. A linear equation can produce intermediate values not actually present in the source data, and thus reduce compression efficiency.
The ISO 8859-1 (Latin-1) character set was chosen as a compromise between functionality and portability. Some platforms cannot display anything more than 7-bit ASCII characters, while others can handle characters beyond the Latin-1 set. We felt that Latin-1 represents a widely useful and reasonably portable character set.
There is at present no provision for text employing character sets other than the ISO 8859-1 (Latin-1) character set. It is recognized that the need for other character sets will increase. However, PNG already requires that programmers implement a number of new and unfamiliar features, and text representation is not PNG's primary purpose. Since PNG provides for the creation and public registration of new ancillary chunks of general interest, it is expected that chunks for other character sets, such as Unicode, will be registered and increase gradually in popularity.
(decimal)     137  80  78  71  13  10  26  10
(hex)          89  50  4e  47  0d  0a  1a  0a
(C notation) \211   P   N   G  \r  \n \032  \n
The first two bytes distinguish PNG files on systems that expect the first two bytes to identify the file type uniquely. The first byte is chosen as a non-ASCII value to reduce the probability that a text file may be misrecognized as a PNG file; also, it catches bad file transfers that clear bit 7. Bytes two through four (overlap with the first two intentional) name the format. The CR-LF sequence catches bad file transfers that alter these characters. The control-Z character stops file display under MSDOS. The final line feed checks for the inverse of the CR-LF translation problem.
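Checking the signature is straightforward; a C sketch (the function name is illustrative, and the eight byte values are those given above):

```c
#include <stddef.h>
#include <string.h>

static const unsigned char png_signature[8] =
    {137, 80, 78, 71, 13, 10, 26, 10};  /* \211 P N G \r \n \032 \n */

/* Return nonzero if buf begins with the 8-byte PNG signature. */
int has_png_signature(const unsigned char *buf, size_t len)
{
    return len >= 8 && memcmp(buf, png_signature, 8) == 0;
}
```

Because the check is a byte-for-byte comparison, any of the transfer corruptions described above (bit-7 clearing, CR-LF translation, control-Z truncation) causes it to fail immediately.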
Note that there is no version number in the signature, nor indeed anywhere in the file. This is intentional: the chunk mechanism provides a better, more flexible way to handle format extensions, as is described below.
Limiting chunk length to (2^31)-1 bytes avoids possible problems for implementations that cannot conveniently handle 4-byte unsigned values. In practice, chunks will usually be much shorter than that anyway.
A separate CRC is provided for each chunk in order to detect badly-transferred images as quickly as possible. In particular, critical data such as the image dimensions can be validated before being used. The chunk length is excluded in order to permit CRC calculation while data is generated (possibly before the length is known, in the case of variable-length chunks); this may avoid an extra pass over the data. Excluding the length from the CRC does not create any extra risk of failing to discover file corruption, since if the length is wrong, the CRC check will fail: the CRC will be computed on the wrong set of bytes, and then tested against the wrong value from the file as well.
A hypothetical chunk for vector graphics would be a critical chunk, since if ignored, important parts of the intended image would be missing. A chunk carrying the Mandelbrot set coordinates for a fractal image would be ancillary, since other applications could display the image without understanding what it was. In general, a chunk type should be made critical only if it is impossible to display a reasonable representation of the intended image without interpreting that chunk.
The public/private property bit ensures that any newly defined public chunk types cannot conflict with proprietary chunks that may be in use somewhere. However, it does not protect users of private chunks from the possibility that someone else may re-use the same chunk name for a different purpose. It is a good idea to put additional identifying information at the start of the data for any private chunk type.
When a PNG file is modified, certain ancillary chunks may need to be changed to reflect changes in other chunks. For example, a histogram chunk needs to be changed if the image data changes. If the encoder does not recognize histogram chunks, copying them blindly to a new output file is incorrect; such chunks should be dropped. The safe/unsafe property bit allows ancillary chunks to be marked appropriately.
Not all possible modification scenarios are covered by the safe/unsafe semantics; in particular, chunks that are dependent on the total file contents are not supported. (An example of such a chunk is an index of IDAT chunk locations within the file: adding a comment chunk would inadvertently break the index.) Definition of such chunks is discouraged. If absolutely necessary for a particular application, such chunks may be made critical chunks, with consequent loss of portability to other applications. In general, ancillary chunks may depend on critical chunks but not on other ancillary chunks. It is expected that mutually dependent information should be put into a single chunk.
In some situations it may be unavoidable to make one ancillary chunk dependent on another. Although the chunk property bits do not allow this case to be represented, a simple solution is available: in the dependent chunk, record the CRC of the chunk depended on. It can then be determined whether that chunk has been changed by some other program.
The same technique may be useful for other purposes. For example, if a program relies on the palette being in a particular order, it may store a private chunk containing the CRC of the PLTE chunk. If this value matches when the file is again read in, then it provides high confidence that the palette has not been tampered with. Note that it is not necessary to mark the private chunk unsafe-to-copy.
The same rationale holds good for palettes which are suggested quantizations of truecolor images. In this situation, it is recommended that the histogram values represent "nearest neighbor" counts, that is, the approximate usage of each palette entry if no dithering is applied. (These counts will often be available for free as a consequence of developing the suggested palette.)
The sample code provided is in the C programming language. (See also ISO 3309 and ITU-T V.42 for a formal specification.)
/* Table of CRCs of all 8-bit messages. */
unsigned long crc_table[256];

/* Flag: has the table been computed? Initially false. */
int crc_table_computed = 0;

/* Make the table for a fast CRC. */
void make_crc_table(void)
{
  unsigned long c;
  int n, k;

  for (n = 0; n < 256; n++) {
    c = (unsigned long) n;
    for (k = 0; k < 8; k++)
      c = c & 1 ? 0xedb88320L ^ (c >> 1) : c >> 1;
    crc_table[n] = c;
  }
  crc_table_computed = 1;
}

/* Update a running CRC with the bytes buf[0..len-1]--the CRC should be
   initialized to all 1's, and the transmitted value is the 1's
   complement of the final running CRC (see the crc() routine below). */
unsigned long update_crc(unsigned long crc, unsigned char *buf, int len)
{
  unsigned long c = crc;
  unsigned char *p = buf;
  int n = len;

  if (!crc_table_computed) {
    make_crc_table();
  }
  if (n > 0) do {
    c = crc_table[(c ^ (*p++)) & 0xff] ^ (c >> 8);
  } while (--n);
  return c;
}

/* Return the CRC of the bytes buf[0..len-1]. */
unsigned long crc(unsigned char *buf, int len)
{
  if (!crc_table_computed) {
    make_crc_table();
  }
  return ~update_crc(0xffffffffL, buf, len);
}
The authors wish to acknowledge the contributions of the Portable Network Graphics mailing list and the readers of comp.graphics.
End of PNG Specification