6.1. Definitions

Attention

attention in machine learning is a principle that serves as a cognitive mechanism for directing computational focus to relevant information. It encompasses importance weighting, selective focus, contextual understanding, and relevance determination.

Attention Mechanism

attention mechanism is a neural network component that serves as a computational module for dynamically weighting input relevance. It consists of an architecture comprising query vectors, key vectors, value vectors, a compatibility function, a softmax layer, and an output aggregator. It computes attention scores when processing sequential data to prioritise relevant information.
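
A minimal NumPy sketch of scaled dot-product attention, one common form of this compatibility function, illustrates the query, key, value, softmax, and aggregation components; the array shapes, toy dimensions, and function names are illustrative assumptions rather than code from this thesis.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the chosen axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Compatibility function: dot products of queries with keys,
    # scaled by the square root of the key dimension.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # The softmax layer turns scores into attention weights that sum to 1.
    weights = softmax(scores, axis=-1)
    # Output aggregator: a weighted sum of the value vectors.
    return weights @ V, weights

# Toy example: 3 query positions attending over 4 key/value positions.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
output, weights = scaled_dot_product_attention(Q, K, V)
print(output.shape, weights.shape)  # (3, 8) (3, 4)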

Autoencoder

autoencoder is a neural network architecture that serves as a dimensionality reduction system for reconstructing input data. It consists of a framework comprising an encoder network, latent space, decoder network, loss function, and optimisation algorithm. It encodes input data whilst maintaining essential features to reconstruct the original form.
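
A minimal PyTorch sketch of such an encoder, latent space, and decoder pipeline; the layer sizes and the mean-squared-error reconstruction loss are illustrative assumptions.

import torch
from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder network: compresses the input into the latent space.
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        # Decoder network: reconstructs the input from the latent code.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))

    def forward(self, x):
        z = self.encoder(x)     # latent representation of essential features
        return self.decoder(z)  # reconstruction of the original form

model = Autoencoder()
x = torch.rand(16, 784)                       # a batch of flattened 28x28 images
loss = nn.functional.mse_loss(model(x), x)    # reconstruction loss
loss.backward()                               # gradients for the optimisation algorithm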

Convolutional Neural Network (CNN)

convolutional neural network is a deep learning architecture that serves as a hierarchical feature extractor for processing spatial data. It consists of a framework comprising convolutional layers, pooling layers, activation functions, fully connected layers, and a classification layer. It applies convolution operations whilst moving across input data to extract spatial hierarchies.
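
A minimal PyTorch sketch of this layer stack for 28x28 grayscale images; the layer sizes and the ten output classes are illustrative assumptions.

import torch
from torch import nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # convolution extracts local features
    nn.ReLU(),                                    # activation function
    nn.MaxPool2d(2),                              # pooling downsamples to 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer: higher-level features
    nn.ReLU(),
    nn.MaxPool2d(2),                              # downsamples to 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # fully connected classification layer
)

x = torch.rand(8, 1, 28, 28)   # batch of 8 single-channel images
logits = cnn(x)
print(logits.shape)            # torch.Size([8, 10])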

Data Encoding

data encoding in deep learning is a transformation method that converts raw input into numerical formats suitable for computation, for example, one-hot encoding or label encoding.

Data in Deep Learning

data in deep learning is a digital asset that consists of raw, unprocessed facts and measurements suitable for computation, for example, a collection of pixel values or text characters.

Data Modality

data modality in deep learning is a classification category that defines the fundamental nature and source of data, for example, image, text, or audio modalities.

Data Representation

data representation in deep learning is a numerical representation of raw data – typically tensors – that is ready for computation by deep learning algorithms.

Data Type

data type in deep learning is a technical specification that defines the format and operations allowed on data values, for example, integer, float, or boolean.

Dataset

dataset in machine learning is a collection that contains organised assemblies of related data points prepared for model training, for example, ImageNet or MNIST.

Dataset Regularisation

dataset regularisation is a technique that serves as a data transformation for ensuring dataset consistency. It applies constraints during preprocessing to achieve a balanced distribution.

Deep Learning

deep learning in machine learning is a subset of artificial intelligence algorithms that processes information using multiple layers of interconnected nodes, loosely inspired by the neural networks of biological brains. For example, convolutional neural networks are used in image recognition.

Diffusion Model

diffusion model in deep learning is a generative neural network that learns to gradually denoise random Gaussian noise into meaningful data samples. For example, it can transform pure noise into a photorealistic image of a cat.
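
A minimal NumPy sketch of the forward (noising) half of this process; the linear noise schedule and the number of steps are illustrative assumptions, and the reverse denoising network is only indicated in a comment.

import numpy as np

T = 1000                                    # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)          # noise schedule
alphas_bar = np.cumprod(1.0 - betas)        # cumulative signal-retention factors

def add_noise(x0, t, rng):
    # Sample x_t directly from x_0: sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * noise.
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise, noise

rng = np.random.default_rng(0)
x0 = rng.uniform(-1, 1, size=(28, 28))      # a toy "image"
x_t, eps = add_noise(x0, t=500, rng=rng)    # halfway through the noising process
# A denoising network would be trained to predict eps from x_t and t,
# then applied step by step in reverse to turn pure noise into a sample.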

Encoder-Decoder

encoder-decoder is a neural network architecture that serves as a sequence processing system for transforming input sequences into output sequences. It consists of an encoder component, an internal representation layer, a decoder component, and, optionally, an attention mechanism. It processes data by encoding input sequences when receiving source data to produce a compressed representation, then generates output sequences whilst accessing encoded information to produce the target sequence.
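
A minimal PyTorch sketch of this two-stage processing over toy integer sequences; the GRU components, vocabulary size, and tensor shapes are illustrative assumptions, and the optional attention mechanism is omitted.

import torch
from torch import nn

vocab, emb, hidden = 100, 32, 64
embed = nn.Embedding(vocab, emb)
encoder = nn.GRU(emb, hidden, batch_first=True)   # encoder component
decoder = nn.GRU(emb, hidden, batch_first=True)   # decoder component
project = nn.Linear(hidden, vocab)                # maps decoder states to output tokens

src = torch.randint(0, vocab, (4, 10))            # source sequences (batch of 4)
tgt = torch.randint(0, vocab, (4, 12))            # target sequences

# Encode: compress the source into an internal representation.
_, state = encoder(embed(src))
# Decode: generate the target sequence while accessing the encoded information.
out, _ = decoder(embed(tgt), state)
logits = project(out)                             # shape (4, 12, vocab)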

Encoding

encoding in deep learning is a transformation process that converts information from one format to another for processing or storage, for example, converting text to numerical vectors.

Encoding Modality

encoding modality in deep learning is a method category that specifies how different types of data are transformed into numerical representations, for example, text encoding or image encoding.

Font-family

font-family is a typeface collection that serves as related fonts for providing consistent typographic styling. It applies visual attributes when text is rendered to maintain design consistency and selects alternative fonts when primary fonts are unavailable to ensure text display. It consists of a family that contains regular, bold, and italic variants, along with weight and style variations.

Font

font is a digital resource that serves as a character set for displaying text in a specific visual style. It converts character codes when text is displayed to produce styled glyphs and adjusts glyph dimensions when size changes whilst maintaining proportions.

Generative Adversarial Network (GAN)

generative adversarial network in machine learning is a framework for learning a generative model using a system of competing neural networks. One network generates synthetic images from random input vectors, and the other discriminates between artificial and authentic images.
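
A minimal PyTorch sketch of the two competing losses; the network sizes, the 64-dimensional noise vectors, and the flattened 28x28 samples are illustrative assumptions, and the optimiser steps are omitted.

import torch
from torch import nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, 784)       # a batch of authentic samples
z = torch.randn(16, 64)          # random input vectors
fake = G(z)                      # synthetic samples

# The discriminator learns to separate authentic from artificial samples.
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake.detach()), torch.zeros(16, 1))
# The generator learns to make the discriminator accept its samples as authentic.
g_loss = bce(D(fake), torch.ones(16, 1))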

Glyph

glyph is a graphical element that serves as a character representation for displaying linguistic symbols. It converts outline data when displaying characters to produce visual form and adjusts vector paths when size changes to maintain quality. It features geometric properties defined by control points, vectorial characteristics that are resolution-independent, and digital attributes that are machine-readable.

Information in Deep Learning

information in deep learning is a conceptual entity that represents meaningful patterns extracted from data, for example, the recognition that a collection of pixels forms a cat image.

Lightning Above the Mountain

lightning above the mountain is a motif from the national anthem of Slovakia. The authors of the anthem's text might have been referring to the large amount of energy that could be harvested to supply demanding AI servers.

Learned Representation

learned representation in deep learning is the set of hidden patterns, features, and abstractions of data that are automatically discovered by neural networks during training rather than being manually engineered – for example, features learned by deep neural networks from raw input data.

Machine Learning Algorithm

machine learning algorithm is a computational procedure that serves as a data processor for improving performance through experience. It consists of a system that contains an input layer, processing components, an optimisation method, an output mechanism, and evaluation metrics. It adjusts parameters when processing training data to improve accuracy and generates predictions when receiving new data to apply learned patterns.

Machine Learning

machine learning is a branch of artificial intelligence that enables computer systems to learn and improve from experience without being explicitly programmed – e.g., image recognition systems, recommendation engines, and autonomous vehicles.

Manifold Learning

manifold learning in machine learning is a dimensionality reduction technique that seeks to identify and represent the low-dimensional manifold structure embedded within high-dimensional data.
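
A minimal scikit-learn sketch on the classic S-curve toy dataset; the choice of Isomap and its neighbourhood size are illustrative assumptions, as other manifold learning methods would serve equally well.

from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

X, _ = make_s_curve(n_samples=1000, random_state=0)   # 3-D points lying on a 2-D manifold
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)                                 # (1000, 2): the recovered low-dimensional structure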

Model Architecture

model architecture is a structural framework that serves as a network structure for arranging computational components. It consists of a system that contains input handling layers, processing units, connection patterns, activation functions, output layers, and hyperparameter settings.

Model in Machine Learning

model in machine learning is the output of the training process and is defined as the mathematical representation of a real-world process.

Normalisation

normalisation is a data preprocessing technique that serves as a mathematical operation for standardising feature scales across a dataset. It transforms values according to chosen statistical properties to achieve a uniform scale.
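
A minimal NumPy sketch of two common choices of statistical properties, z-score standardisation and min-max scaling; the toy feature matrix is an illustrative assumption.

import numpy as np

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])  # features on very different scales

# Z-score standardisation: zero mean and unit variance per feature.
z = (X - X.mean(axis=0)) / X.std(axis=0)

# Min-max scaling: every feature mapped into the [0, 1] range.
m = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))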

One-Hot Encoding

one-hot encoding is a type of data representation that makes each category value distinct by converting categorical variables into binary vectors – for example, representing the categories “apple”, “banana”, and “cherry” as [1, 0, 0], [0, 1, 0], and [0, 0, 1], respectively.
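
A minimal NumPy sketch reproducing the apple, banana, and cherry example above; the index-based mapping is one common way of constructing the binary vectors.

import numpy as np

categories = ["apple", "banana", "cherry"]
index = {c: i for i, c in enumerate(categories)}

def one_hot(value):
    vec = np.zeros(len(categories), dtype=int)
    vec[index[value]] = 1          # exactly one position is "hot"
    return vec

print(one_hot("banana"))           # [0 1 0]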

OpenType

OpenType is a font technology that utilises a cross-platform format to enable advanced typographic features. It consists of specifications that encompass glyph outlines, layout tables, script support, OpenType Features, and metadata.

PostScript

PostScript is a font technology that utilises a vector format for creating device-independent typefaces. It consists of specifications that encompass Type 1 outlines, hint dictionaries, metrics, Adobe Font Metrics, and a rasteriser.

Regularisation

regularisation is a technique that serves as a penalty term for preventing model overfitting. It adds complexity cost during training to achieve a simpler model.
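
A minimal PyTorch sketch of an L2 penalty added explicitly to the training loss; the linear model, the random data, and the penalty weight are illustrative assumptions.

import torch
from torch import nn

model = nn.Linear(10, 1)
x, y = torch.rand(32, 10), torch.rand(32, 1)
lam = 1e-3                                                  # regularisation strength

data_loss = nn.functional.mse_loss(model(x), y)             # fit to the training data
penalty = sum(p.pow(2).sum() for p in model.parameters())   # complexity cost
loss = data_loss + lam * penalty                            # simpler models are preferred
loss.backward()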

Representation in Deep Learning

representation in deep learning is a structural format that organises data in a way that captures essential features and relationships, for example, dense or sparse representations.

Sequential Data

sequential data in deep learning is a data modality that contains ordered observations where the arrangement of elements carries meaning and affects interpretation, for example, word sequences or financial time series.

Sequential Encoding

sequential encoding in deep learning is a transformation type that converts ordered data into numerical representations while preserving sequential relationships, for example, word embeddings or time series encoding.

Spatial Data

spatial data in deep learning is a data type that consists of raw, unprocessed measurements with spatial properties and relationships, for example, depth maps or point clouds.

Spatial Encoding

spatial encoding in deep learning is a transformation type that converts spatial data into numerical representations while preserving spatial relationships, for example, coordinate encoding or spatial hash encoding.

Spatial Type of Data

spatial type of data in deep learning is a technical specification that defines formats and operations for handling data elements with inherent spatial relationships, for example, coordinate systems or volumetric data types.

Transformer

transformer is a neural network architecture that serves as an attention mechanism for processing sequential data in parallel. It consists of a model that contains an encoder stack, decoder stack, self-attention layers, feed-forward networks, positional encoding, layer normalisation, and residual connections. It computes relevance scores when processing input sequences to capture dependencies and processes tokens simultaneously during computation to improve efficiency.
For example, large language models such as Claude Sonnet are based on the transformer architecture.
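
A minimal NumPy sketch of the sinusoidal positional encoding that lets tokens be processed in parallel without losing their order; the sequence length and model width are illustrative assumptions, and self-attention itself is sketched under the attention mechanism entry above.

import numpy as np

def positional_encoding(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]                       # token positions
    i = np.arange(d_model)[None, :]                         # embedding dimensions
    angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])                   # even dimensions: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])                   # odd dimensions: cosine
    return pe

pe = positional_encoding(seq_len=50, d_model=128)
print(pe.shape)   # (50, 128): added to token embeddings before the encoder stack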

TrueType

TrueType is a font technology that utilises an outline format for rendering scalable fonts across digital displays. It consists of specifications that encompass glyph outlines, hinting instructions, kerning tables, metadata, and a rasteriser.

Units per em

units per em is a coordinate system that utilises relative units for defining glyph dimensions and positions within an em square. It consists of an em square that typically contains 1000 units, an origin point, a coordinate grid, vertical metrics, and horizontal metrics.
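
A minimal sketch of scaling glyph coordinates from font units to points; the 1000-unit em square and the example advance width are illustrative assumptions.

units_per_em = 1000          # size of the em square in font units
point_size = 12              # requested text size in points

def to_points(value_in_units):
    # Font units are relative: scale by the ratio of point size to em square.
    return value_in_units * point_size / units_per_em

advance_width = 520          # horizontal advance of a glyph, in font units
print(to_points(advance_width))   # 6.24 points at 12 pt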

Variational Autoencoder

variational autoencoder is a generative neural network that serves as a probabilistic encoder-decoder for learning data distributions. It consists of a network that contains an encoder network, latent space, sampling layer, decoder network, regularisation term, and loss function components. It compresses input whilst enforcing distribution constraints to produce latent variables and reconstructs data when sampling from latent space to generate novel instances.
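
A minimal PyTorch sketch of the sampling layer (the reparameterisation trick) and the two loss components; the layer sizes and the binary cross-entropy reconstruction term are illustrative assumptions.

import torch
from torch import nn

enc = nn.Linear(784, 2 * 16)          # encoder outputs mean and log-variance
dec = nn.Linear(16, 784)              # decoder maps latent samples back to data space

x = torch.rand(8, 784)
mu, logvar = enc(x).chunk(2, dim=-1)  # parameters of the latent distribution

# Sampling layer (reparameterisation): z = mu + sigma * epsilon keeps gradients flowing.
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

recon = torch.sigmoid(dec(z))                                   # reconstruction
recon_loss = nn.functional.binary_cross_entropy(recon, x)       # data term
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # regularisation term
loss = recon_loss + kl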

Citation

If this work is useful for your research, please cite it as:

@phdthesis{paldia2025generative,
  title={Research and development of generative neural networks for type design},
  author={Paldia, Filip},
  year={2025},
  school={Academy of Fine Arts and Design in Bratislava},
  address={Bratislava, Slovakia},
  type={Doctoral thesis},
  url={https://lttrface.com/doctoral-thesis/},
  note={Department of Visual Communication, Studio Typo}
}