Overlaying text on images is a design pattern with a long history, from protective watermarks to funny Internet memes. As social media and personalized user experiences become more important across all web platforms, dynamically combining words and imagery is a powerful but complex feature to implement, and crucial to many products.
imgix has long offered a set of single-line text parameters to address this need. The Typesetting Endpoint (~text) provides additional controls that allow for multi-line text, leading, and letterspacing.
With controls for font size and face, color, padding, outlining, alignment, and cropping/clipping, the txt parameters make adding a line of text to an image incredibly easy. They have limitations, however: they don't offer x/y positioning, multiple lines of text, or the ability to add a background color to improve contrast.
The Typesetting Endpoint combines all of our existing text capabilities with advanced typographic controls like leading, character tracking, multiple lines, and background colors. More importantly, it enables you to create master images from the endpoint directly, without requiring a base image for the text to be overlaid on.
New images created by the Typesetting Endpoint are treated like any other master image and can accept a number of imgix operations. For even more powerful applications, you can use them to create precise, complex layouts in real time via our mark and blend parameters.
The Typesetting Endpoint can be used as a composition and layout engine for any product or feature that requires text and images together, with no requirement that the text overlay be pre-generated. See the Practical Applications section below for examples of new user experiences that could be built on top of this feature.
Note: The Typesetting Endpoint prefers Base64 encoding and should be used with the Base64 versions of the relevant parameters. See the Base64 blog post for more detail.
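As a minimal sketch (Python, using the URL-safe Base64 alphabet that imgix documents for these parameter variants; stripping the `=` padding is an assumption to keep values URL-friendly), a helper for producing Base64 parameter values might look like:

```python
import base64

def b64_param(value: str) -> str:
    """Encode a value for imgix's Base64 parameter variants (txt64, blend64, mark64).

    Uses the URL-safe Base64 alphabet and strips '=' padding so the result
    can be dropped straight into a query string.
    """
    return base64.urlsafe_b64encode(value.encode("utf-8")).decode("ascii").rstrip("=")

b64_param("Hello, World!")  # -> "SGVsbG8sIFdvcmxkIQ"
```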
The Typesetting Endpoint (~text) augments imgix's existing text parameters with the controls needed for complete typographic layouts. First and foremost, it enables multi-line text with automatic word wrapping to the width you specify. The height is flexible: if no height is specified, the image will expand to fit the text.
The Typesetting Endpoint also allows you to set leading (txtlead) and tracking (txttrack) for fine-tuning line and character spacing. Combined with the existing bg parameter, you can easily create popular features such as quote overlays. The generated image will expand to fit the text, width, and font size you define. The default width is 200, and the height can be constrained with the h parameter.
| Control | Parameter | Description |
| --- | --- | --- |
| Tracking | txttrack | Alters the spacing between all characters. |
| Leading | txtlead | Controls the amount of space between lines of text. |
| Background Color | bg | Sets a background color for the text area. |
| Alignment | txtalign | Sets the horizontal alignment of the text. Possible values are left, center, and right. |
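To tie these controls together, here is a hedged sketch (Python; the assets.imgix.net domain and the specific values are illustrative) of a multi-line ~text URL:

```python
from urllib.parse import urlencode

# Hypothetical imgix source domain; substitute your own.
params = {
    "txt": "The quick brown fox jumps over the lazy dog",
    "txtsize": 32,
    "txtlead": 8,         # leading: extra space between lines
    "txttrack": 2,        # tracking: extra space between characters
    "txtalign": "center",
    "bg": "80000000",     # translucent black behind the text area
    "w": 400,             # text wraps automatically to this width
}
url = "https://assets.imgix.net/~text?" + urlencode(params)
```

Because no height is given, the generated image simply grows downward to fit however many lines the text wraps into.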
To use the generated text with a base image, take the entire URL created by the Typesetting Endpoint and append it to the base image URL as the value for either mark or blend. The text image URL will need to be Base64-encoded first, so you will use the mark64 or blend64 variants of those parameters. For example, here is a simple 2-line caption that a photographer might use:
Here is the breakdown of the parameters applied to the image and the parameters applied to the text overlay, followed by the full URL for the image above.
Entire string is Base64-encoded and passed to txt64:

txt64=Taken in Barcelona, Spain. Photographer - Alexandre Perotto

(this text string is shown decoded here; it is also pre-encoded into Base64 in the final URL)
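The two-step encoding above can be sketched in a few lines of Python (the photo path and parameter values are illustrative, not the exact URL from the post):

```python
import base64
from urllib.parse import urlencode

def b64(value: str) -> str:
    # URL-safe Base64, padding stripped, per imgix's Base64 parameter convention
    return base64.urlsafe_b64encode(value.encode()).decode().rstrip("=")

# Step 1: build the ~text overlay; the caption itself travels in txt64.
caption = "Taken in Barcelona, Spain. Photographer - Alexandre Perotto"
text_url = "https://assets.imgix.net/~text?" + urlencode({
    "txt64": b64(caption),
    "txtsize": 16,
    "txtclr": "fff",
    "bg": "80000000",
    "w": 400,
})

# Step 2: Base64-encode the entire overlay URL and hang it off the base image.
base_image = "https://assets.imgix.net/photo.jpg"  # hypothetical base image
final_url = base_image + "?" + urlencode({
    "w": 800,
    "blend64": b64(text_url),
})
```

Encoding the inner URL is what makes nesting safe: none of its `?`, `&`, or `=` characters survive to confuse the outer query string.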
In addition to basic text overlays and layout, the Typesetting Endpoint (in combination with blend) makes more complex compositing possible, with precise positioning of text blocks and the ability to have multiple text overlays on a single base image. As in the above example, all blend values are pre-encoded into Base64 to enable nesting.
txt64=Far far away,
txt64=behind the word mountains, far from the countries Vokalia and Consonantia, there live the blindtexts. Separated they live in Bookmarksgrove right at the coast of the Semantics, a large language ocean.
Both mark and blend offer pixel-precise positioning and padding for text overlays. The markx/marky and bx/by parameters will put the overlay at exactly the coordinates you specify, and are best used when the x and y values aren't the same, or when you want the overlay right up against an edge of the image. Setting them to 0 will override the default padding value of 10 and make the overlay flush with the edge.

The markpad and bp parameters add padding around the overlay to push it away from the edge of the base image. They are applied relative to the alignment of the overlay, so if markalign=bottom and markpad=20, the overlay will be 20 pixels from the bottom.
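As a quick side-by-side (query-string fragments only; the parameter names are imgix's standard watermark controls), the two positioning approaches look like this:

```python
# Absolute: exact coordinates; setting them to 0 overrides the default
# padding so the overlay sits flush against the top-left corner.
absolute = "markx=0&marky=0"

# Relative: align to an edge, then push away from it with padding.
relative = "markalign=bottom,left&markpad=20"
```

Absolute coordinates win when the offsets differ per axis; alignment plus padding wins when you want consistent placement across images of different sizes.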
Click an image to see it in the Sandbox.
Another advantage of the Typesetting Endpoint is the ability to put backgrounds behind the text and control the background's opacity. Adding a background is easily done by applying the bg parameter, as in the photographer's watermark example above. That example uses a translucent black 8-digit hexadecimal color (80000000), but bg also accepts 3- and 6-digit colors if you prefer a solid background. The first two digits of an 8-digit color are the alpha channel expressed in hexadecimal, so 80000000 is black at roughly 50% opacity.
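Because the alpha prefix is a hex byte (00 to ff) rather than a percentage, converting a desired opacity into the two-digit prefix is a one-liner (Python sketch; with_alpha is a hypothetical helper, not an imgix API):

```python
def with_alpha(opacity: float, rgb: str) -> str:
    """Prefix a 6-digit hex color with an alpha byte for an opacity in [0, 1]."""
    return f"{round(opacity * 255):02x}{rgb}"

with_alpha(0.5, "000000")  # -> "80000000", black at ~50% opacity
```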
Translucency can also be applied to the text itself using the blend parameters. Set balph to an opacity percentage, and that opacity will be applied to whatever txtclr value you set for the overlay (the default is 000, black). The same effect can be achieved with mark and the markalpha parameter.
There are a lot of ways to powerfully combine text and images to create richer visual experiences and creative tools, in addition to the straightforward use of overlays. In particular, being able to programmatically pipe text into the Typesetting Endpoint allows for more dynamic personalization and control over output images intended for sharing on social media. Consider the following common use cases:
Generate an image of a specified size by piping the width and height into the w and h parameters of the Typesetting Endpoint, with no base image required and minimal CSS. https://placem.at/ is a placeholder service that uses it along with public-domain images, and you can see a very simple implementation by clicking the button below.
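A placeholder generator along these lines can be sketched in a few lines (Python; the domain, label format, and styling choices are illustrative):

```python
from urllib.parse import urlencode

def placeholder(width: int, height: int) -> str:
    """Build a text-only placeholder URL; no base image is involved."""
    return "https://assets.imgix.net/~text?" + urlencode({
        "txt": f"{width} x {height}",   # render the dimensions as the label
        "txtsize": 24,
        "txtalign": "middle,center",    # center vertically and horizontally
        "w": width,                     # constrain the canvas to the requested size
        "h": height,
        "bg": "EFEFEF",
    })

placeholder(300, 150)
```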
Add user data programmatically to any image to create better user experiences.
- Watermark user-generated content to protect your users' contributions
- Create personalized images for promotions based on user profiles/preferences (great for applications where CSS is problematic, like email)
Create a tool that allows users to easily make shareable quotes by choosing an image, entering text, and applying overlay effects like the ones listed in the Best Practices section below.
Add dynamically-generated text like hashtags, pull quotes, captions, or URLs to content that you share on social media, without opening Photoshop.
Making type look good over images can be a challenge, depending on the color range and visual complexity of the base image. Here are some tips and examples that will help guide your design decisions when using the Typesetting Endpoint for overlays.
The easiest way to ensure that the text is readable is to choose a base image that isn't busy, or that has a large enough area of solid color to contrast with the text. This works better for one-off situations than for large batches of images, however, since the location of the text would need to change from image to image.
If the base image is particularly busy, you can create a calm area behind the text by adding a background and padding. For a basic background, try bg=000&markpad=20, and adjust the color to match the image as desired.
This method obscures the photo a bit, but can help the text pop if the image detail isn't as important. You can use the blend parameter to apply an overlay of any color by supplying its hex value. blend supports transparency either through an 8-digit hexadecimal value, or through the balph parameter in addition to a 3- or 6-digit color.
Another way to reduce visual noise is to blur the base image so there is less detail competing with the text. The blur parameter does this easily; try blur=100 as a starting point and adjust from there.
The Typesetting Endpoint is a powerful tool, and the techniques listed here are great places to start when trying it out and considering what you might want to build. However, they are just the beginning of what can be done. We encourage you to experiment with it—the links to the Sandbox from this post, the sample code below, and the documentation examples are great ways to try out different options and combinations of effects.