4.4. Demonstration: Use Case of the Dataset

While LTTR/SET presents a regularised type designers’ dataset built on promising theoretical foundations, its practical value demands validation. The sequence regularisation techniques employed suggest tantalising possibilities, yet these claims require rigorous examination before they can join the established canon.

To scrutinise these assertions, particularly the claimed benefits of dataset regularisation, a thorough empirical evaluation is essential. DeepVecFont-2 (Wang et al. 2023) was selected to serve as the testbed in this trial of algorithmic merit. This established system, when trained on the regularised dataset, should provide an impartial assessment of whether LTTR/SET’s methodological innovations truly advance the field or remain a purely theoretical contribution.

DeepVecFont-2 presents an intriguing solution: it exploits advances in the computer vision domain and the image generation techniques known from products like Stable Diffusion (“Stable Diffusion” [2022] 2023) or DALL·E (“DALL·E: Creating Images from Text” 2021) to guide vector drawing generation techniques similar to language model systems like ChatGPT (“ChatGPT,” n.d.; Eloundou et al. 2023). One can think of this method as seeing a visual representation of the glyph while actually drawing its Bézier curves. Note: we will delve deeper into encoding techniques in another article.

Another argument for DeepVecFont-2 (Wang et al. 2023) is its native capability of completing a font from a few initial glyphs. This application is called font completion or few-shot font generation (Y. Park et al. 2018; Yuan et al. 2021; S. Park [2021] 2022). For a broader study of font completion and other systems for this application, please refer to the previous article (“Use of Generative AI in Type Design: The State-of-the-Art Survey,” n.d.). The DeepVecFont-2 paper itself (Wang et al. 2023) claims promising results.

Lastly, the availability of system resources, namely code, satisfactory documentation, and MIT licensing, must be highlighted (“The MIT License,” n.d.). This makes it a generous gift for all the pioneers in the type design domain. The downside of the system remains its inability to encode type designers’ drawings, the same as every other system found during this thesis. Therefore, the benefits of this valuable dataset feature remain unvalidated for now.
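The dual-modality intuition, seeing a rendered image while emitting a sequence of Bézier drawing commands, can be sketched in a few lines. This is a toy illustration only: the `Command` type, the `rasterize` helper, and the grid size are assumptions made for this example, not DeepVecFont-2’s actual data format or API; the real system uses a neural rasteriser and a transformer decoder rather than end-point marking.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Command:
    op: str                   # "moveTo", "lineTo", "curveTo", "close"
    args: Tuple[float, ...]   # normalised coordinates in [0, 1]

def rasterize(commands: List[Command], size: int = 8) -> List[List[int]]:
    """Toy raster stand-in: mark the grid cell at each command's end point.
    A real system renders the full outline so the image branch can 'see'
    what the sequence branch is drawing, token by token."""
    grid = [[0] * size for _ in range(size)]
    for cmd in commands:
        if cmd.op == "close" or not cmd.args:
            continue
        x, y = cmd.args[-2], cmd.args[-1]      # end point of the command
        gx = min(size - 1, max(0, round(x * (size - 1))))
        gy = min(size - 1, max(0, round(y * (size - 1))))
        grid[gy][gx] = 1
    return grid

# A crude square outline as a command sequence.
square = [
    Command("moveTo", (0.1, 0.1)),
    Command("lineTo", (0.9, 0.1)),
    Command("lineTo", (0.9, 0.9)),
    Command("lineTo", (0.1, 0.9)),
    Command("close", ()),
]
image = rasterize(square)
print(sum(sum(row) for row in image))  # → 4 (the four corner cells)
```

In the actual model the rendered image and the command sequence are produced jointly, so errors visible in the image modality can correct the vector modality during generation.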

“ChatGPT.” n.d. Accessed October 18, 2024. https://openai.com/chatgpt/overview/.
“DALL·E: Creating Images from Text.” 2021. OpenAI. January 5, 2021. https://openai.com/blog/dall-e/.
Eloundou, Tyna, Sam Manning, Pamela Mishkin, and Daniel Rock. 2023. “GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.” August 21, 2023. https://doi.org/10.48550/arXiv.2303.10130.
Park, Song. (2021) 2022. “FFG-benchmarks.” Clova AI Research. https://github.com/clovaai/fewshot-font-generation.
Park, Yonggyu, Junhyun Lee, Yookyung Koh, Inyeop Lee, Jinhyuk Lee, and Jaewoo Kang. 2018. “Typeface Completion with Generative Adversarial Networks.” December 13, 2018. https://doi.org/10.48550/arXiv.1811.03762.
“Stable Diffusion.” (2022) 2023. CompVis - Computer Vision & Learning, LMU Munich. https://github.com/CompVis/stable-diffusion.
“The MIT License.” n.d. Open Source Initiative. Accessed October 28, 2024. https://opensource.org/license/MIT.
“Use of Generative AI in Type Design: The State-of-the-Art Survey.” n.d. Accessed October 18, 2024. https://www.lttrink.com/blog/use-of-generative-ai-in-type-design-the-state-of-the-art-survey.
Wang, Yuqing, Yizhi Wang, Longhui Yu, Yuesheng Zhu, and Zhouhui Lian. 2023. “DeepVecFont-v2: Exploiting Transformers to Synthesize Vector Fonts with Higher Quality.” March 25, 2023. https://doi.org/10.48550/arXiv.2303.14585.
Yuan, Ye, Wuyang Chen, Zhaowen Wang, Matthew Fisher, Zhifei Zhang, Zhangyang Wang, and Hailin Jin. 2021. “Font Completion and Manipulation by Cycling Between Multi-Modality Representations.” August 29, 2021. https://doi.org/10.48550/arXiv.2108.12965.

Citation

If this work is useful for your research, please cite it as:

@phdthesis{paldia2025generative,
  title={Research and development of generative neural networks for type design},
  author={Paldia, Filip},
  year={2025},
  school={Academy of Fine Arts and Design in Bratislava},
  address={Bratislava, Slovakia},
  type={Doctoral thesis},
  url={https://lttrface.com/doctoral-thesis/},
  note={Department of Visual Communication, Studio Typo}
}