<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/resources/style/atom.xsl"?>
<feed
  xmlns="http://www.w3.org/2005/Atom"
  xmlns:media="http://search.yahoo.com/mrss/"
  xml:lang="en">

  <id>https://connor.zip/atom</id>
  <title>connor.zip</title>
  <subtitle>A software engineer's scratchpad.</subtitle>
  <link href="https://connor.zip/atom" rel="self" type="application/atom+xml" />
  <link href="https://connor.zip" />
  <icon>https://connor.zip/resources/images/turtle.png</icon>
  <updated>2025-10-15T00:00:00-05:00</updated>

  
  <entry>
    <id>https://connor.zip/posts/2025-10-15-rose-rosette-disease</id>
    <title>Rose Rosette Disease</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2025-10-15-rose-rosette-disease" />
    <published>2025-10-15T00:00:00-05:00</published>
    <summary>Rose Rosette Disease affecting roses in Little Rock</summary>
    
    <media:content url="https://connor.zip/resources/images/2025-10-15-rose-rosette-disease/pink-and-yellow-roses.jpg" medium="image" width="600" height="800"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>Over the last few years, we've greatly improved the landscaping of our home in downtown Little Rock. This year, we decided to delineate the beds from the yard to reduce the difficulty of trimming and handling weeds. Instead of using a metal or plastic border, we simply used a garden hose to trace out a snaking border line for several beds, then dug a trench with a square transfer shovel. The trench is about four inches deep and angled on one side, which allows the mulch to fall into the trench and cuts the roots of the grass. We lined the beds with builder's paper or newspaper to stop the weeds, then shoveled on a few inches of mulch. In all we made around half a dozen trips to the nursery to buy black-dyed cedar mulch, each time filling the bed of my '96 Toyota Tacoma with about a cubic yard of material.</p>
<p>The beds turned out beautifully, but the paper did not stop the <a href="https://www.canr.msu.edu/news/pain-in-the-grass-bermudagrass">bermuda grass</a> from tunneling up to the surface. Next year, I'll use a sturdier biodegradable barrier like cardboard. This season I'll try a <a href="https://content.ces.ncsu.edu/segment-sethoxydim">sethoxydim</a> product such as Fertilome Over The Top II.</p>
<figure>
<img src="/resources/images/2025-10-15-rose-rosette-disease/pink-and-yellow-roses.jpg" alt="A blooming pink and yellow rose" />
<figcaption>A blooming pink and yellow rose</figcaption>
</figure>
<p>The pride of our front garden this fall was our collection of rose bushes on each side of the front sidewalk, which exploded with vibrant blooms of all colors as the weather cooled. We received many compliments from our neighbors on them, and each morning I would step out with my coffee and just stand around them, taking in the colors. But alas, all things come to an end.</p>
<h2 id="a-death-sentence-for-roses">A Death Sentence for Roses</h2>
<p>In researching garden plants for my zone, I learned about Rose Rosette Disease. This is a virus which infects roses, causing abnormal shoot growth, mottled leaves, reddish discoloration, excessive thorniness, and a tell-tale &quot;witches' broom&quot; formation where many branches arise from one point. It spreads from rose to rose via tiny eriophyid mites, which ride on the wind or are blown about by leaf blowers. Unfortunately, some of my roses were already affected by the virus. By the time symptoms appear, it's likely the entire plant is already infected, as the virus travels through the vascular system even into the roots -- plants can be asymptomatic for up to six months after infection. There is no cure; once a plant is infected, the only course of action is to dig it up entirely, bag it, and dispose of it in the garbage where it won't infect another rose. Our local nursery Good Earth has a short article on <a href="https://thegoodearthgarden.com/rose-rosette-virus-identification-and-control/">RRD: Identification, Symptoms, and Treatment</a>, including alternative plants.</p>
<p>Here are a few resources on the disease:</p>
<ul>
<li><a href="https://www.canr.msu.edu/news/rose_gardeners_should_learn_the_symptoms_of_rose_rosette_virus">Rose Rosette Disease: A Death Sentence for Roses - Michigan State University</a></li>
<li><a href="https://extensiongardener.ces.ncsu.edu/2024/03/rose-rosette-virus-flower-killer/">Rose Rosette Flower Killer - North Carolina Extension Service</a></li>
<li><a href="https://www.gardencentermag.com/article/garden0515-rose-rosette-disease-control/">A Plague of Roses - Garden Center Magazine</a></li>
<li><a href="https://utia.tennessee.edu/publications/wp-content/uploads/sites/269/2024/12/W1284.pdf">Rose Rosette Disease, a Quick Overview - University of Tennessee Institute of Agriculture</a></li>
</ul>
<p>My approach has been to prune infected canes to the ground, spray all my roses every few days with a 2 oz/gallon <a href="https://bonide.com/product/neem-oil-conc/">neem oil</a> solution to control the mites, and vigilantly inspect new growth for symptoms. There is <a href="https://garden.org/ideas/view/gemini_sage/2291/My-Experience-with-Rose-Rosette-Disease/">anecdotal evidence</a> that aggressive pruning of infected canes may prevent the entire rose from becoming infected if done in time. We already dead-head regularly, but have begun disinfecting our clippers with bleach between bushes to prevent transfer of mites.</p>
<p>On a recent walk around the neighborhood, I noticed a staggering rate of infection: all seven rose bushes on my block exhibit symptoms. Some have been symptomatic for some time, while others are just starting to show signs. Mites from these roses are likely what infected mine; after being blown by the wind or a leaf blower, it takes a mite only about an hour to transfer the virus.</p>
<figure>
<img src="/resources/images/2025-10-15-rose-rosette-disease/witches-broom.jpg" alt="A witches' broom formation" />
<figcaption>A witches' broom formation</figcaption>
</figure>
<p>I spoke with <a href="https://personnel.uada.edu/detail/4032484/">Derek Reed</a> at the Pulaski County Master Gardeners Association, and he shared two fact sheets, which I'll provide here:</p>
<ul>
<li><a href="/resources/pdfs/rrd-fsa-7579.pdf">UA Division of Agriculture: Rose Rosette Disease</a></li>
<li><a href="/resources/pdfs/rrd-epp-7329.pdf">Oklahoma Cooperative Extension Service: Rose Rosette Disease</a></li>
</ul>
<p>Derek shared with me:</p>
<blockquote>
<p>Rose Rosette disease is very common in our area unfortunately. A large part of this issue is due to lack of awareness. Rose bushes get it and then it is spread by mites. Part of the issue is an infected plant may not show symptoms. Once infected it is best to remove the entire plant. Unfortunately, since it is a virus it infects the entire system and is not localized. So while you [can remove] the symptoms by pruning the plant is still infected. Any mites that are living on that bush can still transmit it to other plants.</p>
</blockquote>
<h2 id="treatment">Treatment</h2>
<p>Montana State University Extension suggests<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>:</p>
<blockquote>
<p>To control the mites chemically, a dormant oil can be applied prior to bud break. Neem oil, insecticidal soap, a miticide, sulfur and those insecticides with the active ingredients bifenthrin, deltamethrin, and permethrin can be applied as contact insecticides in the spring following bud break. Systemic insecticides with the active ingredients imidacloprid and dinotefuran can also be used against the pest and should be applied in the spring.</p>
</blockquote>
<h2 id="resistance">Resistance</h2>
<p>Research<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> has shown that rose varieties native to North America are resistant to the RRD virus, and some varieties related to these or to a resistant Asian species, R. rugosa, show resistance as well. These native roses generally have a flatter flower reminiscent of a blackberry; however, several hybrids are beautiful garden roses, such as <a href="https://heirloomroses.com/products/john-davis">John Davis Hardy Rose</a>, <a href="https://heirloomroses.com/products/therese-bugnet">Thérèse Bugnet Rugosa Rose</a>, <a href="https://roguevalleyroses.com/rose/moores-striped-rugosa/">Moore’s Striped Rugosa</a>, <a href="https://www.davidaustinroses.com/products/bonica">Bonica</a>, <a href="https://heirloomroses.com/products/morden-blush">Morden Blush</a>, and <a href="https://www.gertens.com/winnipeg-parks-rose">Winnipeg Parks</a>.</p>
<p>On his blog, Dr. Roush <a href="https://kansasgardenmusings.blogspot.com/2022/11/november-notes.html#:~:text=I've%20got%20to%20spend%20some%20of%20today%20preparing%20for%20a%20Johnson%20County%20Master%20Gardener%20presentation%20about%20Rugosa%20and%20Old%20Garden%20Roses.%C2%A0%20Since%20they're%20all%20that%20Rose%20Rosette%20disease%20has%20left%20me%2C%20you%20can%20bet%20that%20I'm%20going%20to%20touch%20on%20that%20hell%2Dborne%20scourge%20as%20well.">shares his experience</a> that Rugosa and Old Garden Roses resist RRD. See the tag <a href="https://kansasgardenmusings.blogspot.com/search/label/Rosa%20rugosa">Rosa rugosa</a> for many beautiful R. rugosa hybrids. His articles on <a href="https://kansasgardenmusings.blogspot.com/2011/05/sir-thomas-lipton.html">Sir Thomas Lipton</a> and <a href="https://kansasgardenmusings.blogspot.com/2013/05/trailer-trash-therese.html">Therese Bugnet</a> are informative.</p>
<figure>
<img src="/resources/images/2025-10-15-rose-rosette-disease/morden-blush.jpg" alt="Morden Blush (original)" />
<figcaption>Morden Blush (<a href="https://www.arboretum.purdue.edu/explorer/plants/607">original</a>)</figcaption>
</figure>
<p>In researching this, I discovered that Ralph Moore, who created the resistant Fuzzy Wuzzy Red and Moore's Striped Rugosa, is the creator of the entire category of moss roses (see <a href="https://pacifichorticulture.org/articles/ralph-moore-father-of-the-miniature-rose/">Ralph Moore: Father of the Miniature Rose</a>) and donated funds and rose germplasm to the Rose Breeding and Genetics Program at Texas A&amp;M University. He also created the <a href="https://www.antiqueroseemporium.com/products/mermaid">Mermaid</a> rose, a more manageable climbing rose related to the invasive <a href="https://texasinvasives.org/plant_database/detail.php?symbol=ROBR">McCartney Rose (R. bracteata)</a>; this enabled him to produce more beautiful crosses such as the <a href="https://www.helpmefind.com/gardening/pl.php?n=4848">Pink Powderpuff</a>, as detailed in <a href="https://www.helpmefind.com/gardening/ezine.php?publicationID=544">Rosa Bracteata: a Vicious, Feral Beauty, Tamed at Last!</a> -- unfortunately there is not yet a scientific study of these roses' resistance.</p>
<figure>
<img src="/resources/images/2025-10-15-rose-rosette-disease/rosa-arkansana.jpg" alt="R. arkansana (original)" />
<figcaption>R. arkansana (<a href="https://www.wildflower.org/gallery/result.php?id_image=24294">original</a>)</figcaption>
</figure>
<p>Below are a selection of resistant roses:</p>
<table>
<thead>
<tr>
<th>Variety</th>
<th>Susceptibility</th>
<th>Kind</th>
<th>Zones</th>
<th>Height</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://www.forestfarm.com/rosa-carolina-roca127.html">R. carolina FF</a></td>
<td>None</td>
<td>Pasture</td>
<td>5+</td>
<td>3-6'</td>
</tr>
<tr>
<td>R. bracteata RM</td>
<td>None</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="https://www.helpmefind.com/rose/l.php?l=2.38321.0">Fuzzy Wuzzy Red</a></td>
<td>None</td>
<td>Moss</td>
<td>5b-10b</td>
<td>1'</td>
</tr>
<tr>
<td><a href="https://www.helpmefind.com/rose/l.php?l=2.21611">Purple Pavement</a></td>
<td>None</td>
<td>Hybrid Rugosa</td>
<td>3b+</td>
<td>5'</td>
</tr>
<tr>
<td><a href="https://www.helpmefind.com/gardening/l.php?l=2.2125">Morden Blush</a></td>
<td>None</td>
<td>Shrub</td>
<td>3a+</td>
<td>4'</td>
</tr>
<tr>
<td><a href="https://www.helpmefind.com/gardening/l.php?l=2.67755.1">Chuckles</a></td>
<td>None</td>
<td>Floribunda</td>
<td></td>
<td>5'</td>
</tr>
<tr>
<td><a href="https://www.helpmefind.com/rose/l.php?l=2.5791">Sir Thomas Lipton</a></td>
<td>None</td>
<td>Hybrid Rugosa, Shrub</td>
<td>4b+</td>
<td>5-8'</td>
</tr>
<tr>
<td><a href="https://www.forestfarm.com/rosa-virginiana-rovi202.html">R. virginiana FF</a></td>
<td>None</td>
<td></td>
<td></td>
<td>3-6'</td>
</tr>
<tr>
<td>R. folialosa ARE</td>
<td>None</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="https://roguevalleyroses.com/rose/r-woodsii/">R. woodsii RVR</a></td>
<td>None</td>
<td></td>
<td></td>
<td>4-5'</td>
</tr>
<tr>
<td><a href="https://www.helpmefind.com/gardening/l.php?l=2.23672">Fairy Moss</a></td>
<td>Low</td>
<td>Miniature, Polyantha</td>
<td>5b+</td>
<td>2'</td>
</tr>
<tr>
<td><a href="https://www.helpmefind.com/gardening/l.php?l=2.5954.1">Star Delight</a></td>
<td>Low</td>
<td>Hybrid Rugosa</td>
<td></td>
<td>2'</td>
</tr>
<tr>
<td><a href="https://www.helpmefind.com/rose/l.php?l=2.1934">John Davis</a></td>
<td>Low</td>
<td>Hybrid Kordesii</td>
<td>2b+</td>
<td>7'</td>
</tr>
<tr>
<td><a href="https://www.helpmefind.com/gardening/l.php?l=2.37560">Moore’s Striped Rugosa</a></td>
<td>Low</td>
<td>Hybrid Rugosa</td>
<td>5b-10b</td>
<td>4-5'</td>
</tr>
<tr>
<td><a href="https://roses.tamu.edu/basyes-blueberry/">Basye’s Blueberry</a></td>
<td>Low</td>
<td>Shrub</td>
<td>5a-9b</td>
<td>5-8'</td>
</tr>
<tr>
<td><a href="https://www.helpmefind.com/rose/pl.php?n=6596">Winnipeg Parks</a></td>
<td>Low but Positive</td>
<td>Shrub</td>
<td>2b+</td>
<td>3'</td>
</tr>
</tbody>
</table>
<p>Native roses include <a href="https://www.antiqueroseemporium.com/products/r-wichuraiana-thornless">R. wichuraiana thornless</a>, the <a href="https://www.wildflower.org/plants/result.php?id_plant=roar3">Prairie Rose (R. arkansana)</a>, the <a href="https://plants.ces.ncsu.edu/plants/rosa-carolina/">Carolina Rose (R. carolina)</a>, the <a href="https://www.wildflower.org/plants/result.php?id_plant=ROFO">White Prairie Rose (R. foliolosa)</a>, the <a href="https://plants.ces.ncsu.edu/plants/rosa-virginiana/">Virginia Rose (R. virginiana)</a>, and <a href="https://www.wildflower.org/plants/result.php?id_plant=rowo">Woods' Rose (R. woodsii)</a>.</p>
<p>Some resistant roses include: <a href="https://heirloomroses.com/products/top-gun">Top Gun</a>, <a href="https://www.antiqueroseemporium.com/products/basyes-blueberry">Basye’s Blueberry</a>, <a href="https://heirloomroses.com/products/john-davis">John Davis Hardy Rose</a>, <a href="https://heirloomroses.com/products/purple-pavement">Purple Pavement Rugosa Rose</a>, <a href="https://www.antiqueroseemporium.com/products/sir-thomas-lipton">Sir Thomas Lipton</a>, <a href="https://heirloomroses.com/products/therese-bugnet">Thérèse Bugnet Rugosa Rose</a>, <a href="https://roguevalleyroses.com/rose/moores-striped-rugosa/">Moore’s Striped Rugosa</a>, <a href="https://heirloomroses.com/products/chuckles?srsltid=AfmBOooz9TocDsnUNyVBZAfBXJl59yA0ccIO-KvZXf3_vMHJckLbs6GU">Chuckles Hardy Rose</a>, <a href="https://roguevalleyroses.com/rose/fairy-moss/">Fairy Moss</a>, <a href="https://heirloomroses.com/products/stormy-weather?srsltid=AfmBOoqrbKI42ObRmC1KKysVvWGYNmb0-TRUQ0exbV1L6uoc-mNJD4OD">Stormy Weather</a>, <a href="https://www.davidaustinroses.com/products/bonica">Bonica</a> (resistant to the mite, not the pathogen), <a href="https://heirloomroses.com/products/morden-blush">Morden Blush</a>, and <a href="https://www.highcountryroses.com/shop/modern-roses/hardy-canadian-roses/winnipeg-parks/">Winnipeg Parks</a>, developed from native prairie roses.</p>
<p>Those which were symptomatic in the trial but in which no RRV was detected include: <a href="https://www.antiqueroseemporium.com/products/caldwell-pink">Caldwell Pink</a>, <a href="https://www.antiqueroseemporium.com/products/lafter">Lafter</a>, <a href="https://roguevalleyroses.com/hybridizer/manetti/">Manetti</a>, and <a href="https://www.highcountryroses.com/shop/modern-roses/hardy-canadian-roses/morden-fireglow/">Morden Fireglow</a>.</p>
<h2 id="awareness">Awareness</h2>
<p>To help improve awareness, I've created a <a href="https://www.google.com/maps/d/edit?mid=18kRTdOIk1l1nmVHi34x26Mf8N50-DJs&amp;usp=sharing">Symptomatic Rose Map</a>, and will be sharing an information packet including the above fact sheets with rose growers in my neighborhood. I am not an expert, so not all diagnoses will be accurate.</p>
<iframe src="https://www.google.com/maps/d/embed?mid=18kRTdOIk1l1nmVHi34x26Mf8N50-DJs&amp;ehbc=2E312F" width="100%" height="480" />
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p><a href="https://www.montana.edu/extension/Full_HTML_Pubs/a-guide-to-pests-problems-and-identification-of-ornamental-shrubs-and-trees-in-montana/insects/blister-mites.html">Blister Mites - Montana State University Extension</a>&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>Results are taken from the study <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10052971/">Field Resistance to Rose Rosette Disease as Determined by Multi-Year Evaluations in Tennessee and Delaware</a> states the following:</p>
<blockquote>
<p>Among those with no or low symptom development are nine species accessions. Accessions of five North American species (R. arkansana, R. carolina, R. folialosa, R. virginiana, and R. woodsii) and one Asian species (R. rugosa) belong to two closely related sections, Carolinae and Cinnamomeae. Previous work has reported that accessions of several North American species from the subgenus Carolinae (R. arkansana, R. blanda, R. carolina, R. californica, and R. palustris) did not show symptom development when they were grafted with infected buds, indicating that there are strong sources of resistance to RRD among this group of rose species native to North America.</p>
</blockquote>
<blockquote>
<p>Another large group of accessions with low symptom development is the hybrids with R. rugosa (‘John Davis’, ‘Fuzzy Wuzzy Red’, Moore’s Striped Rugosa (‘MORbeauty’), Purple Pavement (‘HANpur’), ‘Sir Thomas Lipton’, Star Delight (‘MORstar90’), and ‘Therese Bugnet’).</p>
</blockquote>
<blockquote>
<p>The final two accessions that developed no or low symptoms of RRD over the 3-year trial were the floribunda rose Chuckles (‘SIMmimi’) and the miniature rose Fairy Moss (MORfairpol’)</p>
</blockquote>
<blockquote>
<p>Based on current data, the following rose cultivars have shown no symptoms or detectable virus: ‘R. arkansana FF,’ ‘R. bracteata RM,’ ‘Fuzzy Wuzzy Red,’ ‘Purple Pavement,’ ‘Morden Blush,’ ‘Chuckles,’ ‘Sir Thomas Lipton’ and selections of ‘R. virginiana FF,’ ‘R. folialosa ARE,’ ‘R. carolina FF’ and ‘R. woodsii RVR.’</p>
</blockquote>
<blockquote>
<p>These species were incorporated by crosses with a local accession of R. arkansana as well as through his use of the roses ‘Prairie Princess’ and ‘Assinboine’. ‘Morden Blush’ and ‘Winnipeg Parks’ showed few symptoms, whereas ‘Morden Centennial’ and ‘Morden Fireglow’ showed moderate symptom development.</p>
</blockquote>
<blockquote>
<p>Those cultivars showing moderate RRD symptoms without a positive RRV diagnosis were ‘Caldwell Pink’, ‘Lafter’, Manetti, ‘Morden Fireglow’, and Sorcerer (‘SAVasorc’). These roses generally developed rosettes of an RRV infection but not until late in the trial.</p>
</blockquote>
<p>In each of the native varieties, &quot;R.&quot; abbreviates the genus name <em>Rosa</em>, and the letters which follow the name indicate the source of each specimen:</p>
<table>
<thead>
<tr>
<th>Code</th>
<th>Source</th>
</tr>
</thead>
<tbody>
<tr>
<td>ARE</td>
<td><a href="https://www.antiqueroseemporium.com">Antique Rose Emporium</a></td>
</tr>
<tr>
<td>Bailey</td>
<td><a href="https://www.baileynurseries.com">Baileys Nursery</a></td>
</tr>
<tr>
<td>FF</td>
<td><a href="https://www.forestfarm.com">Forest Farm</a></td>
</tr>
<tr>
<td>FPS</td>
<td><a href="https://fps.ucdavis.edu/roses.cfm">Foundation Plant Services</a></td>
</tr>
<tr>
<td>RM</td>
<td><a href="https://roses.tamu.edu/about/ralph-moore/">Ralph Moore</a></td>
</tr>
<tr>
<td>RVR</td>
<td><a href="https://roguevalleyroses.com">Rogue Valley Roses</a></td>
</tr>
</tbody>
</table>
&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2025-05-03-rune</id>
    <title>Rune</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2025-05-03-rune" />
    <published>2025-05-03T00:00:00-05:00</published>
    <summary>How "rune" came to mean a Unicode code point</summary>
    
    <media:content url="https://connor.zip/resources/images/2025-05-03-rune/rune.webp" medium="image" width="600" height="800"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>I'm taking a break from social media and focusing more on consuming the written word, through books and web feeds via <a href="https://netnewswire.com">NetNewsWire</a>. One of those feeds is <a href="https://scour.ing/@cpt">Scour</a>, an aggregation of my imported feeds automatically filtered by interest -- especially useful for high-volume feeds like HN. This week my feed contained Adam Pritchard's article on <a href="https://adam-p.ca/blog/2025/04/string-length/">limiting string length</a>, which contained this note:</p>
<blockquote>
<p>Note that in Go, a Unicode code point is typically called a “rune”. (Go seems to have introduced the term for the sake of brevity. I certainly appreciate that, but I’m going to stick with universal terms here.)</p>
</blockquote>
<figure>
<img src="/resources/images/2025-05-03-rune/rune.webp" alt="An Earth Rune from the 2007-era version of the online role-playing game Runescape" />
<figcaption>An Earth Rune from the 2007-era version of the online role-playing game Runescape</figcaption>
</figure>
<p>When was the term &quot;rune&quot; introduced, and why? I thought I had seen it outside of Go, and did some digging.</p>
<p>As explained in <a href="https://go.dev/blog/strings">Strings, bytes, runes, and characters in Go</a>,</p>
<blockquote>
<p>“Code point” is a bit of a mouthful, so Go introduces a shorter term for the concept: rune. The term appears in the libraries and source code, and means exactly the same as “code point”, with one interesting addition.</p>
</blockquote>
<p>The Go programming language was created by a group including Rob Pike, Ken Thompson, and <a href="https://research.swtch.com">Russ Cox</a>; all Bell Labs alumni who had collaborated on the <a href="https://9p.io/plan9/about.html">Plan 9 operating system</a> -- see <a href="https://go.dev/talks/2012/splash.article">Go at Google: Language Design in the Service of Software Engineering</a>. Rob Pike is also the author of the Plan 9 editor <a href="https://research.swtch.com/acme">Acme</a>, from which I write this, which Russ Cox ported to UNIX (along with many Plan 9 utilities) in <a href="https://github.com/9fans/plan9port"><code>plan9port</code></a>. Their experience on Plan 9 and Inferno meant many ideas from the Plan 9 C compiler and languages like Alef made it into Go -- the linker architecture, channels, the significance of capitalization, the focus on simplicity, the usage of <a href="https://dl.acm.org/doi/10.1145/6424.315691">&quot;little languages,&quot;</a> etc.</p>
<p>Plan 9 was also where UTF-8 was originally implemented, motivated by the difficulties with UTF-16 -- as Rob Pike writes in <a href="https://commandcenter.blogspot.com/2020/01/utf-8-turned-20-years-old-in-2012.html">UTF-8 turned 20 years old</a>:</p>
<blockquote>
<p>UTF was awful. It had modulo-192 arithmetic, if I remember correctly, and was all but impossible to implement efficiently on old SPARCs with no divide hardware. Strings like &quot;/*&quot; could appear in the middle of a Cyrillic character, making your Russian text start a C comment. And more. It simply wasn't practical as an encoding: think what happens to that slash byte inside a Unix file name.</p>
</blockquote>
<p><a href="https://www.cl.cam.ac.uk/~mgk25/ucs/utf-8-history.txt">This email thread</a> tells the story<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>:</p>
<pre><code>Subject: UTF-8 history
From: &quot;Rob 'Commander' Pike&quot; &lt;r (at) google.com&gt;
Date: Wed, 30 Apr 2003 22:32:32 -0700 (Thu 06:32 BST)
To: mkuhn (at) acm.org, henry (at) spsystems.net
Cc: ken (at) entrisphere.com

Looking around at some UTF-8 background, I see the same incorrect
story being repeated over and over.  The incorrect version is:
	1. IBM designed UTF-8.
	2. Plan 9 implemented it.
That's not true.  UTF-8 was designed, in front of my eyes, on a
placemat in a New Jersey diner one night in September or so 1992.

What happened was this.  We had used the original UTF from ISO 10646
to make Plan 9 support 16-bit characters, but we hated it.  We were
close to shipping the system when, late one afternoon, I received a
call from some folks, I think at IBM - I remember them being in Austin
- who were in an X/Open committee meeting.  They wanted Ken and me to
vet their FSS/UTF design.  We understood why they were introducing a
new design, and Ken and I suddenly realized there was an opportunity
to use our experience to design a really good standard and get the
X/Open guys to push it out.  We suggested this and the deal was, if we
could do it fast, OK.  So we went to dinner, Ken figured out the
bit-packing, and when we came back to the lab after dinner we called
the X/Open guys and explained our scheme.  We mailed them an outline
of our spec, and they replied saying that it was better than theirs (I
don't believe I ever actually saw their proposal; I know I don't
remember it) and how fast could we implement it?  I think this was a
Wednesday night and we promised a complete running system by Monday,
which I think was when their big vote was.

So that night Ken wrote packing and unpacking code and I started
tearing into the C and graphics libraries.  The next day all the code
was done and we started converting the text files on the system
itself.  By Friday some time Plan 9 was running, and only running,
what would be called UTF-8.  We called X/Open and the rest, as they
say, is slightly rewritten history.

Why didn't we just use their FSS/UTF?  As I remember, it was because
in that first phone call I sang out a list of desiderata for any such
encoding, and FSS/UTF was lacking at least one - the ability to
synchronize a byte stream picked up mid-run, with less that one
character being consumed before synchronization.  Becuase that was
lacking, we felt free - and were given freedom - to roll our own.

I think the &quot;IBM designed it, Plan 9 implemented it&quot; story originates
in RFC2279.  At the time, we were so happy UTF-8 was catching on we
didn't say anything about the bungled history.  Neither of us is at
the Labs any more, but I bet there's an e-mail thread in the archive
there that would support our story and I might be able to get someone
to dig it out.

So, full kudos to the X/Open and IBM folks for making the opportunity
happen and for pushing it forward, but Ken designed it with me
cheering him on, whatever the history books say.

-rob
</code></pre>
<p>That email chain includes the proposed FSS-UTF (File System Safe UTF) standard:</p>
<pre><code> The proposed UCS transformation format encodes UCS values in the range
 [0,0x7fffffff] using multibyte characters of lengths 1, 2, 3, 4, and 5
 bytes.  For all encodings of more than one byte, the initial byte
 determines the number of bytes used and the high-order bit in each byte
 is set.

 An easy way to remember this transformation format is to note that the
 number of high-order 1's in the first byte is the same as the number of
 subsequent bytes in the multibyte character:

    Bits  Hex Min  Hex Max         Byte Sequence in Binary
 1    7  00000000 0000007f 0zzzzzzz
 2   13  00000080 0000207f 10zzzzzz 1yyyyyyy
 3   19  00002080 0008207f 110zzzzz 1yyyyyyy 1xxxxxxx
 4   25  00082080 0208207f 1110zzzz 1yyyyyyy 1xxxxxxx 1wwwwwww
 5   31  02082080 7fffffff 11110zzz 1yyyyyyy 1xxxxxxx 1wwwwwww 1vvvvvvv

 The bits included in the byte sequence is biased by the minimum value
 so that if all the z's, y's, x's, w's, and v's are zero, the minimum
 value is represented.	In the byte sequences, the lowest-order encoded
 bits are in the last byte; the high-order bits (the z's) are in the
 first byte.

 This transformation format uses the byte values in the entire range of
 0x80 to 0xff, inclusive, as part of multibyte sequences.  Given the
 assumption that at most there are seven (7) useful bits per byte, this
 transformation format is close to minimal in its number of bytes used.
</code></pre>
<p>And the UTF-8 proposal by Ken Thompson:</p>
<pre><code>We define 7 byte types:
T0	0xxxxxxx	7 free bits
Tx	10xxxxxx	6 free bits
T1	110xxxxx	5 free bits
T2	1110xxxx	4 free bits
T3	11110xxx	3 free bits
T4	111110xx	2 free bits
T5	111111xx	2 free bits

Encoding is as follows.
&gt;From hex	Thru hex	Sequence		Bits
00000000	0000007f	T0			7
00000080	000007FF	T1 Tx			11
00000800	0000FFFF	T2 Tx Tx		16
00010000	001FFFFF	T3 Tx Tx Tx		21
00200000	03FFFFFF	T4 Tx Tx Tx Tx		26
04000000	FFFFFFFF	T5 Tx Tx Tx Tx Tx	32
</code></pre>
<p>See the <a href="https://pubs.opengroup.org/onlinepubs/009649899/toc.pdf">File System Safe UCS Transformation Format</a> by The Open Group, this version from 1995.</p>
<p>Importantly, it enables us to seek to the middle of a file or stream and read valid characters, or to handle a corrupted character:</p>
<blockquote>
<p>All of the sequences synchronize on any byte that is not a Tx byte.</p>
</blockquote>
<p>We can highlight the differences from FSS-UTF using <a href="https://datatracker.ietf.org/doc/html/rfc3629">RFC 3629</a>, which uses the same tabular format:</p>
<pre><code>The table below summarizes the format of these different octet types.
The letter x indicates bits available for encoding bits of the
character number.

Char. number range  |        UTF-8 octet sequence
   (hexadecimal)    |              (binary)
--------------------+---------------------------------------------
0000 0000-0000 007F | 0xxxxxxx
0000 0080-0000 07FF | 110xxxxx 10xxxxxx
0000 0800-0000 FFFF | 1110xxxx 10xxxxxx 10xxxxxx
0001 0000-0010 FFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
</code></pre>
<p>Notably:</p>
<ul>
<li>The format is (as of RFC 3629, the first in the series which is not purely informational) limited to four bytes.</li>
<li>Each continuation byte has <em>two</em> leading bits, <code>10</code>, which distinguish it from a leading byte.</li>
<li>The two-byte leading prefix is <code>110</code> rather than <code>10</code>, again so that leading and continuation bytes can be told apart.</li>
</ul>
<p>These changes lead to a less dense representation where three bytes store exactly 16 bits and four bytes can store 21 bits.</p>
<p>In Rob Pike's paper on their implementation, <a href="/resources/pdfs/utf8.pdf"><em>Hello World</em></a>, lies the first mention of <em>Rune</em> I could find:</p>
<blockquote>
<p>On the semantic level, ANSI C allows, but does not tie down, the notion of a wide character and admits string and character constants of this type. We chose the wide character type to be unsigned short. In the libraries, the word Rune is defined by a typedef to be equivalent to unsigned short and is used to signify a Unicode character.</p>
</blockquote>
<p>It seems likely that &quot;rune&quot; originated here, as it's a kind of synonym for character (<code>char</code>). Later, the C99 standard (§7.24) introduced &quot;extended multibyte and wide character utilities&quot; including the <code>wchar_t</code> type; see <a href="https://www.gnu.org/software/libunistring/manual/html_node/The-wchar_005ft-mess.html">The <code>wchar_t</code> mess</a>. C99's locale functionality can also be surprising; see this <a href="https://news.ycombinator.com/item?id=36216389">HN post on how <code>isspace()</code> changes with locale</a>.</p>
<p>Interestingly, in Plan 9 C they used an unsigned short (16 bits), but in Go the type is instead a signed int (32 bits) to support the additional code points added since 1992. Remember, this encoding was meant to replace a two-byte encoding covering exactly 16 bits of data (the Basic Multilingual Plane). In the original email, Ken notes:</p>
<blockquote>
<p>The 4, 5, and 6 byte sequences are only there for political reasons. I would prefer to delete these.</p>
</blockquote>
<p>And aligned with that, the paper mentions:</p>
<pre><code>UTFmax = 3, /* maximum bytes per rune */
</code></pre>
<p>Three bytes could represent a maximum of 16 bits, while four bytes can represent a maximum of 21 bits. In Go, <a href="https://pkg.go.dev/unicode/utf8#pkg-constants"><code>UTFMax = 4</code></a>, and a <code>rune</code> is equivalent to a signed 32-bit integer. In <code>plan9port</code>, <a href="https://github.com/9fans/plan9port/blob/9da5b4451365e33c4f561d74a99ad5c17ff20fed/include/utf.h#L11"><code>UTFmax = 4</code></a>, and <code>Rune</code> is an <em>unsigned</em> integer -- a <a href="https://github.com/9fans/plan9port/commit/0cadb4301d18724e7513d7489cb5bebd262c82f1">change Russ Cox made in late 2009</a>. The Linux man page <a href="https://man7.org/linux/man-pages/man7/utf-8.7.html"><code>utf-8(7)</code></a> notes that ISO 10646 defined UCS-2, a 16-bit code space, and UCS-4, a 31-bit code space, which justifies the signed 32-bit integer representation.</p>
<p>So, we've established that <code>Rune</code> existed at least as early as 1992, when UTF-8 was introduced, and that it was inherited by Go through its Plan 9 C lineage. Was it in use elsewhere in 1992? Searching the internet, I get a few hits:</p>
<ul>
<li>
<p><a href="https://man.freebsd.org/cgi/man.cgi?query=rune&amp;sektion=3&amp;apropos=0&amp;manpath=FreeBSD+5.4-RELEASE">FreeBSD rune functions</a>, which it inherits from 4.4BSD. The manual states:</p>
<blockquote>
<p>The setrunelocale() function and the other non-ANSI rune functions were inspired by Plan 9 from Bell Labs.</p>
</blockquote>
<p>And further notes:</p>
<blockquote>
<p>The 4.4BSD &quot;rune&quot; functions have been deprecated in favour of the ISO C99 extended <a href="https://man.freebsd.org/cgi/man.cgi?query=multibyte&amp;sektion=3">multibyte and wide character facilities</a> and should not be used in new applications.</p>
</blockquote>
<p><a href="https://wolfram.schneider.org/bsd/44doc/smm/01.setup/paper.pdf">Installing and Operating 4.4BSD UNIX</a> from 1993 also includes:</p>
<blockquote>
<p>ANSI C multibyte and wide character support has been integrated. The rune functionality from the Bell Labs' Plan 9 system is provided as well.</p>
</blockquote>
</li>
<li>
<p><a href="https://developer.apple.com/documentation/kernel/rune_t?changes=l___3&amp;language=objc">Apple kernel docs for <code>rune_t</code></a>; since Darwin derives from BSD, this likely originates in 4.4BSD.</p>
</li>
<li>
<p><a href="https://chromium.googlesource.com/native_client/nacl-newlib/+/65e6baefeb2874011001c2f843cf3083e771b62f/newlib/libc/sys/linux/include/rune.h"><code>newlib</code></a>, a C standard library implementation that includes the rune functionality from 4.4BSD.</p>
</li>
<li>
<p>Android's copy of <a href="https://cs.android.com/android/platform/superproject/+/master:external/libutf/rune.c;drc=a91263e8760ffc1d399224e2640b8ec3dd87bff2;l=111?authuser=3&amp;hl=pt"><code>libutf</code></a> from Plan 9, <a href="https://9fans.github.io/plan9port/unix/">ported by Russ Cox to UNIX</a> as part of <code>plan9port</code>.</p>
</li>
<li>
<p>.NET's <a href="https://learn.microsoft.com/en-us/dotnet/fundamentals/runtime-libraries/system-text-rune"><code>System.Text.Rune</code></a> influenced by Go; see the <a href="https://github.com/dotnet/runtime/issues/23578">GitHub issue</a> by <a href="https://tirania.org/blog/">Miguel de Icaza</a>:</p>
<blockquote>
<p>As for why the name rune, the inspiration comes from Go</p>
</blockquote>
</li>
<li>
<p>The <a href="http://magic-cookie.co.uk/jargon/mit_jargon.htm">MIT Jargon File</a> includes:</p>
<blockquote>
<p>runes pl.n.</p>
<ol>
<li>Anything that requires heavy wizardry or black art to parse: core dumps, JCL commands, APL, or code in a language you haven't a clue how to read. Compare casting the runes, Great Runes.</li>
<li>Special display characters (for example, the high-half graphics on an IBM PC).</li>
</ol>
</blockquote>
</li>
</ul>
<p>The Plan 9 rune functionality was incorporated into 4.4BSD by <a href="https://www.linkedin.com/in/paul-borman-91879/">Paul Borman</a>, and became the ancestor to many of the uses of the term outside of the direct Plan 9 lineage. He would later join Google and contribute to the Go programming language <sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>. In <a href="https://www.tuhs.org/cgi-bin/utree.pl?file=4.4BSD/usr/include/machine/ansi.h"><code>machine/ansi.h</code></a>, we can see that <code>rune_t</code> is defined as an <code>int</code> instead of as an <code>unsigned short</code>, with the following justification:</p>
<pre><code>/*
 * Runes (wchar_t) is declared to be an ``int'' instead of the more natural
 * ``unsigned long'' or ``long''.  Two things are happening here.  It is not
 * unsigned so that EOF (-1) can be naturally assigned to it and used.  Also,
 * it looks like 10646 will be a 31 bit standard.  This means that if your
 * ints cannot hold 32 bits, you will be in trouble.  The reason an int was
 * chosen over a long is that the is*() and to*() routines take ints (says
 * ANSI C), but they use _RUNE_T_ instead of int.  By changing it here, you
 * lose a bit of ANSI conformance, but your programs will still work.
 *
 * Note that _WCHAR_T_ and _RUNE_T_ must be of the same type.  When wchar_t
 * and rune_t are typedef'd, _WCHAR_T_ will be undef'd, but _RUNE_T remains
 * defined for ctype.h.
 */
</code></pre>
<p>The files, e.g. <a href="https://www.tuhs.org/cgi-bin/utree.pl?file=4.4BSD/usr/include/rune.h"><code>rune.h</code></a> and <a href="https://www.tuhs.org/cgi-bin/utree.pl?file=4.4BSD/usr/include/runetype.h"><code>runetype.h</code></a>, all bear the copyright notice:</p>
<blockquote>
<p>This code is derived from software contributed to Berkeley by Paul Borman at Krystal Technologies.</p>
</blockquote>
<figure>
<img src="/resources/images/2025-05-03-rune/krystal.jpg" alt="A Krystal hamburger location" />
<figcaption>A Krystal hamburger location</figcaption>
</figure>
<p>The only information I can find on Krystal Technologies is that it once owned <a href="https://www.krystal.com">krystal.com</a>, and was later dragged into a <a href="https://www.nashvillepost.com/home/simmering-krystal-domain-name-dispute-resolved/article_7343b4e0-3d73-5ff7-bee0-73a8792f7f61.html">dispute with the Krystal hamburger chain</a> in 2000 (seven years after its mention in the rune files) which ended in a settlement.</p>
<p>FreeBSD still uses the rune types, e.g. in <a href="https://github.com/freebsd/freebsd-src/blob/ec5083a0e890be3e59960e73867b611d32c11c4c/lib/libc/locale/utf8.c"><code>utf8.c</code></a>, to provide wide character support for the UTF-8 locale.</p>
<p>The Unicode and ISO 10646 standards do not contain the term &quot;rune&quot; either. I reached out to Rob Pike on <a href="https://bsky.app/profile/connor.zip/post/3lo7l5brx7k24">Bluesky</a> to ask if &quot;rune&quot; did originate in Plan 9:</p>
<blockquote>
<p>Actually Ken Thompson suggested it while the two of us were brainstorming for a type name that wasn't 'char'. He said triumphantly and I immediately agreed we had it.</p>
<p>Oh yes, and it was the name we needed in Plan 9 for UTF and ISO 10646, before Unicode and UTF-8 and decades before Go.</p>
</blockquote>
<p>And later, Rob <a href="https://github.com/9fans/plan9port/commit/0cadb4301d18724e7513d7489cb5bebd262c82f1">posted</a> a confirmed origin date, thanking <a href="http://www.collyer.net/who/geoff/">Geoff Collyer</a><sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup> for searching through the Plan 9 dump:</p>
<blockquote>
<p>The Plan 9 C source files /sys/src/libc/port/*rune* appeared in the daily backup on Dec 9, 1991, so the name was coined on the evening of the 8th.</p>
</blockquote>
<p>So the term &quot;rune&quot; is over thirty years old, and has made its way from Plan 9 into 4.4 BSD and then several UNIX variants and C libraries, into Go and then into .NET, and through ports of <code>libutf</code> into Android.</p>
<h2 id="epilogue">Epilogue</h2>
<p>Thanks to Adam Pritchard for noting some spelling errors on this post, which motivated me to write a small spelling utility for Acme, <code>Spell</code>, which wraps <a href="http://aspell.net"><code>aspell</code></a>'s peculiar <a href="http://aspell.net/man-html/Through-A-Pipe.html"><code>ispell</code>-compatible</a> output:</p>
<pre><code>#!/usr/bin/env rc

file=`{basename $%}
name=`{dirname $%}'/+Spell'

id=`{9p read acme/index | 9 awk ' $6 == &quot;'$name'&quot; { print $1 }'}
if (~ $id '') id=new
id=`{9p read acme/$id/ctl | 9 awk '{print $1}'}
echo 'name '$name | 9p write acme/$id/ctl

printf , | 9p write acme/$id/addr
9p read acme/$winid/body \
	| 9 sed 's/^/ /' \
	| aspell pipe list --mode=url \
	| 9 awk '
		BEGIN { lines=1 }
		/^&amp;/ { gsub(/:/, &quot;&quot;, $4); print &quot;'$file':&quot; lines &quot;:&quot; $4 &quot;\t&quot; $2  }
		/^#/ { gsub(/:/, &quot;&quot;, $3); print &quot;'$file':&quot; lines &quot;:&quot; $3 &quot;\t&quot; $2  }
		/^$/ { lines++ }' \
	| mc \
	| 9p write acme/$id/data

echo clean | 9p write acme/$id/ctl

</code></pre>
<p>Button 2 clicking <code>Spell</code> in a window's tag opens a new <code>+Spell</code> window (or reuses an existing one) for the current directory, and writes each misspelled word prefixed by its address. To navigate to a misspelled word, simply button 3 click on the address and make the correction.</p>
<figure>
<img src="/resources/images/2025-05-03-rune/spell.png" alt="+Spell window in Acme" />
<figcaption><code>+Spell</code> window in Acme</figcaption>
</figure>
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>In <a href="https://commandcenter.blogspot.com/2020/01/utf-8-turned-20-years-old-in-2012.html">UTF-8 turned 20 years old</a>, Rob Pike clarifies which diner:</p>
<blockquote>
<p>The diner was the Corner Café in New Providence, New Jersey. We just called it Mom's, to honor the previous proprietor. I don't know if it's still the same, but we went there for dinner often, it being the closest place to the Murray Hill offices. Being a proper diner, it had paper placemats, and it was on one of those placemats that Ken sketched out the bit-packing for UTF-8. It was so easy once we saw it that there was no reason to keep the placemat for notes, and we left it behind. Or maybe we did bring it back to the lab; I'm not sure. But it's gone now.</p>
</blockquote>
&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></li>
<li id="fn:2">
<p>Among his contributions is a <a href="https://github.com/pborman/options"><code>getopt</code> style options package</a> as an alternative to <code>flag</code>.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Geoff Collyer was a member of the technical staff at Bell Labs. He recently spoke about <a href="https://www.youtube.com/watch?v=EOg6UzSss2A">Plan 9 on 64-bit RISC-V</a>.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2025-04-23-noaa-receipts</id>
    <title>NOAA Weather Receipts</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2025-04-23-noaa-receipts" />
    <published>2025-04-23T00:00:00-05:00</published>
    <summary>Printing NOAA weather alerts on a receipt printer in near real-time</summary>
    
    <media:content url="https://connor.zip/resources/images/2025-04-23-noaa-receipts/receipt.jpg" medium="image" width="600" height="800"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
<p>This weekend we had a bout of bad weather. Our secondhand NOAA weather radio sounded off repeatedly, local weather broadcasters breathlessly reported rotational formations on television, iPhones buzzed with emergency alerts, and city sirens sounded announcing a Tornado Warning. Each of these mechanisms bases its operation on the <a href="https://www.weather.gov/lzk">National Weather Service forecast office in Little Rock (LZK)</a>, a part of the National Oceanic and Atmospheric Administration.</p>
<figure>
<img src="/resources/images/2025-04-23-noaa-receipts/radio.jpg" alt="Midland Weather Monitor Model 74-109" />
<figcaption>Midland Weather Monitor Model 74-109</figcaption>
</figure>
<p>NWS makes some of the most important information available through <em>alerts</em>, which are broadcast over weather radio following the classic tone. Our weather radio listens for these tones and either blinks an alarm light, emits a siren, or tunes into the broadcast for a preset time before returning to ready mode. What if we could retrieve this information programmatically, and print it out on some sort of continuous-feed paper?</p>
<h2 id="star-sp-700">Star SP-700</h2>
<figure>
<img src="/resources/images/2025-04-23-noaa-receipts/receipt.jpg" alt="Star SP-700 receipt printer with printed weather alerts" />
<figcaption>Star SP-700 receipt printer with printed weather alerts</figcaption>
</figure>
<p>Months ago, I had stumbled upon a <a href="https://star-emea.com/products/sp700/">Star SP-700</a> high-speed, two-color matrix printer at Goodwill, including the 10/100 Base-T interface module which allows for networking. The network interface provides an interactive web interface, support for DHCP, TLS, SNMP, FTP, and even <code>telnet</code>. And, because this is not a <em>thermal</em> receipt printer, there's no risk of <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC7071457/">BPS exposure</a>.</p>
<figure>
<img src="/resources/images/2025-04-23-noaa-receipts/web.png" alt="Star SP-700 Web Interface" />
<figcaption>Star SP-700 Web Interface</figcaption>
</figure>
<p>A <code>telnet</code> configuration session:</p>
<pre><code class="language-sh">; telnet star-sp700.home.arpa
Trying 10.0.3.21...
Connected to star-sp700.home.arpa.
Escape character is '^]'.

Welcome to IFBD-HE07/08 TELNET Utility.
Copyright(C) 2005 Star Micronics co., Ltd.

&lt;&lt; Connected Device &gt;&gt;
    Device Model: SP742 (STR-001)
    NIC Product : IFBD-HE07/08
    MAC Address : 00:11:62:23:D2:E4

login: root
Password: ******
Hello root

=== Main Menu ===
  1) IP Parameters Configuration
  2) System Configuration
  3) Change Password
  5) SNMP
 96) Display Status
 97) Reset Settings to Defaults
 98) Save &amp; Restart
 99) Quit

Enter Selection:
</code></pre>
<p>Like FTP, <code>telnet</code> support is surprisingly common on printer network cards, the HP LaserJet card being one example. In another similarity, the Star SP-700 supports raw TCP/IP printing on port 9100, which in its case means plain ASCII text punctuated with control codes and using <code>\r\n</code> for line termination. The <a href="https://starmicronics.com/support/Mannualfolder/scp700pm.pdf">SCP700 Series Programmer's Manual</a> enumerates the supported escape sequences in chapter nine. In my program, I used:</p>
<table>
<thead>
<tr>
<th>Sequence</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>&lt;ESC&gt; &quot;4&quot;</code></td>
<td>Select highlight printing (red text)</td>
</tr>
<tr>
<td><code>&lt;ESC&gt; &quot;5&quot;</code></td>
<td>Cancel highlight printing</td>
</tr>
<tr>
<td><code>&lt;ESC&gt; &quot;E&quot;</code></td>
<td>Select emphasized printing (bold text)</td>
</tr>
<tr>
<td><code>&lt;ESC&gt; &quot;F&quot;</code></td>
<td>Cancel emphasized printing</td>
</tr>
</tbody>
</table>
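<p>Sketched in Go (my own illustration; the byte values come from the table above, and the printer itself is assumed), a headline can be wrapped in highlight and emphasized modes like so:</p>

```go
package main

import "fmt"

const esc = "\x1b"

// headline wraps the event name in highlight (ESC "4"/"5") and
// emphasized (ESC "E"/"F") modes, with the CRLF ending the printer expects.
func headline(event string) string {
	return esc + "4" + esc + "E" + event + esc + "5" + esc + "F" + "\r\n"
}

func main() {
	fmt.Printf("%q\n", headline("Tornado Warning"))
}
```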
<h2 id="noaa-api">NOAA API</h2>
<p>NOAA makes these alerts available through a well-documented public <a href="https://www.weather.gov/documentation/services-web-api#/default/alerts_active">API</a>, which supports several response formats, including two variants of JSON and Atom. The Atom feed contains standard fields, so it <em>should</em> be compatible with an RSS feed reader like NetNewsWire; however, it isn't, because the API expects the <code>Accept</code> header to contain <code>application/atom+xml</code> and otherwise defaults to GeoJSON. A simple proxy which sets the <code>Accept</code> header should make this possible, though.</p>
<p>I opted for the <a href="https://json-ld.org">JSON-LD</a> option, which provides us with hyperlinked <code>@id</code>s and references to other objects, which can be fetched simply by following those links. We can even navigate it in our browser, but we'll see the default <a href="https://geojson.org">GeoJSON</a> format; for instance here are the <a href="https://api.weather.gov/alerts/active">active alerts</a>. Since the entire API is based on JSON-LD, the GeoJSON response still contains links.</p>
<p>By polling the active alerts endpoint, we can fetch an up-to-date set of alerts for events such as Tornado Warnings, Tornado Watches, Severe Thunderstorm Warnings, etc. Here is an example <a href="https://api.weather.gov/alerts/urn:oid:2.49.0.1.840.0.0a1f85d83b19fdb9a447cf2f79c781f09bd436eb.002.1">event</a> pulled from that endpoint:</p>
<pre><code class="language-json">{
    &quot;@id&quot;:&quot;https://api.weather.gov/alerts/urn:oid:2.49.0.1.840.0.2cb9a691a88714bdb5bbad42d2e4f414e66cb1d6.001.1&quot;,
    &quot;@type&quot;:&quot;wx:Alert&quot;,
    &quot;id&quot;:&quot;urn:oid:2.49.0.1.840.0.2cb9a691a88714bdb5bbad42d2e4f414e66cb1d6.001.1&quot;,
    &quot;areaDesc&quot;:&quot;Faulkner, AR; Pulaski, AR; Saline, AR&quot;,
    &quot;geometry&quot;:&quot;POLYGON((-92.67 34.51,-92.73 34.55,-92.47 34.9099999,-92.14 34.67,-92.67 34.51))&quot;,
    &quot;geocode&quot;:{&quot;SAME&quot;:[&quot;005045&quot;,&quot;005119&quot;,&quot;005125&quot;],&quot;UGC&quot;:[&quot;ARC045&quot;,&quot;ARC119&quot;,&quot;ARC125&quot;]},
    &quot;affectedZones&quot;:[&quot;https://api.weather.gov/zones/county/ARC045&quot;,&quot;https://api.weather.gov/zones/county/ARC119&quot;,&quot;https://api.weather.gov/zones/county/ARC125&quot;],
    &quot;references&quot;:[],
    &quot;sent&quot;:&quot;2025-04-20T18:31:00-05:00&quot;,
    &quot;effective&quot;:&quot;2025-04-20T18:31:00-05:00&quot;,
    &quot;onset&quot;:&quot;2025-04-20T18:31:00-05:00&quot;,
    &quot;expires&quot;:&quot;2025-04-20T19:15:00-05:00&quot;,
    &quot;ends&quot;:&quot;2025-04-20T19:15:00-05:00&quot;,
    &quot;status&quot;:&quot;Actual&quot;,
    &quot;messageType&quot;:&quot;Alert&quot;,
    &quot;category&quot;:&quot;Met&quot;,
    &quot;severity&quot;:&quot;Extreme&quot;,
    &quot;certainty&quot;:&quot;Observed&quot;,
    &quot;urgency&quot;:&quot;Immediate&quot;,
    &quot;event&quot;:&quot;Tornado Warning&quot;,
    &quot;sender&quot;:&quot;w-nws.webmaster@noaa.gov&quot;,
    &quot;senderName&quot;:&quot;NWS Little Rock AR&quot;,
    &quot;headline&quot;:&quot;Tornado Warning issued April 20 at 6:31PM CDT until April 20 at 7:15PM CDT by NWS Little Rock AR&quot;,
    &quot;description&quot;:&quot;TORLZK\n\nThe National Weather Service in Little Rock has issued a\n\n* Tornado Warning for...\nSouthwestern Faulkner County in central Arkansas...\nCentral Saline County in central Arkansas...\nCentral Pulaski County in central Arkansas...\n\n* Until 715 PM CDT.\n\n* At 630 PM CDT, a severe thunderstorm capable of producing a tornado\nwas located over Haskell, or near Benton, moving northeast at 35\nmph.\n\nHAZARD...Tornado.\n\nSOURCE...Radar indicated rotation.\n\nIMPACT...Flying debris will be dangerous to those caught without\nshelter. Mobile homes will be damaged or destroyed.\nDamage to roofs, windows, and vehicles will occur.  Tree\ndamage is likely.\n\n* Locations impacted include...\nAlexander...                      Otter Creek...\nHiggins...                        College Station...\nNatural Steps...                  Cammack Village...\nSouthwest Little Rock...          Bauxite...\nIronton...                        Argenta...\nQuapaw Quarter...                 Vimy Ridge...\nHillcrest Neighborhood...         Chenal Valley...\nWar Memorial Stadium...           Bryant...\nPinnacle Mountain State Park...   Maumelle...\nThe Heights...                    Shannon Hills...&quot;,
    &quot;instruction&quot;:&quot;TAKE COVER NOW! Move to a basement or an interior room on the lowest\nfloor of a sturdy building. Avoid windows. If you are outdoors, in a\nmobile home, or in a vehicle, move to the closest substantial shelter\nand protect yourself from flying debris.&quot;,
    &quot;response&quot;:&quot;Shelter&quot;,
    &quot;parameters&quot;:{&quot;AWIPSidentifier&quot;:[&quot;TORLZK&quot;],&quot;WMOidentifier&quot;:[&quot;WFUS54 KLZK 202331&quot;],&quot;eventMotionDescription&quot;:[&quot;2025-04-20T23:30:00-00:00...storm...240DEG...30KT...34.54,-92.64&quot;],&quot;maxHailSize&quot;:[&quot;0.00&quot;],&quot;tornadoDetection&quot;:[&quot;RADAR INDICATED&quot;],&quot;BLOCKCHANNEL&quot;:[&quot;EAS&quot;,&quot;NWEM&quot;],&quot;EAS-ORG&quot;:[&quot;WXR&quot;],&quot;VTEC&quot;:[&quot;/O.NEW.KLZK.TO.W.0091.250420T2331Z-250421T0015Z/&quot;],&quot;eventEndingTime&quot;:[&quot;2025-04-21T00:15:00+00:00&quot;],&quot;WEAHandling&quot;:[&quot;Imminent Threat&quot;],&quot;CMAMtext&quot;:[&quot;NWS: TORNADO WARNING in this area til 7:15 PM CDT. Take shelter now. Check media.&quot;],&quot;CMAMlongtext&quot;:[&quot;National Weather Service: TORNADO WARNING in this area until 7:15 PM CDT. Take shelter now in a basement or an interior room on the lowest floor of a sturdy building. If you are outdoors, in a mobile home, or in a vehicle, move to the closest substantial shelter and protect yourself from flying debris. Check media.&quot;]},
    &quot;replacedBy&quot;:&quot;https://api.weather.gov/alerts/urn:oid:2.49.0.1.840.0.79caba517735181c1a45b17add84f4df80cd7466.001.1&quot;,
    &quot;replacedAt&quot;:&quot;2025-04-20T18:55:00-05:00&quot;
}
</code></pre>
<p>Under <code>description</code> and <code>instruction</code>, we can see text which is likely meant for the computerized voice synthesis program used by the NWS weather radio broadcasts. Most of this message is the same between similar events; however, &quot;located over Haskell, or near Benton, moving northeast at 35 mph&quot; is not present elsewhere in the message. We can speculate that it is built from lookup tables based on the <code>eventMotionDescription</code> information. We can also see specific subsections mentioned, &quot;Southwestern Faulkner County&quot; and &quot;Central Saline County&quot;, which are possibly created by analyzing how the <code>geometry</code> polygon intersects these regions; the <code>affectedZones</code> field only gives us a county-level list.</p>
<h2 id="program">Program</h2>
<p>I've been learning and using the <a href="/resources/pdfs/acme.pdf">Acme</a> editor lately, part of <a href="https://github.com/9fans/plan9port"><code>plan9port</code></a>, and as part of that I've been using the <a href="/resources/pdfs/rc-shell.pdf"><code>rc</code> shell</a> to write shell scripts. As outlined in the paper, <code>rc</code> solves some of the issues which make using <code>bash</code> a pain, notably the rules around quoting and handling spaces in variables. I'll introduce the program in pieces.</p>
<p>Our goal is to print alerts which are:</p>
<ul>
<li>currently active,</li>
<li>pertinent to us, or local to our geographic area,</li>
<li>and haven't been printed before.</li>
</ul>
<p>For the first, we leverage the <a href="https://editor.swagger.io/?url=https://api.weather.gov/openapi.json#operations-default-alerts_active"><code>/alerts/active</code> endpoint</a>.</p>
<pre><code class="language-rc">curl \
    --silent \
    --fail-with-body \
    --show-error \
    --header 'User-Agent: (weather.connor.zip, weather@connor.zip)' \
    --header 'Accept: application/ld+json' \
    https://api.weather.gov/alerts/active?area=AR
</code></pre>
<p>For the second, we first pass <code>?area=AR</code> to the API (as seen above), and then use <code>jq</code> to filter on the <code>affectedZones</code> array once we've determined our zone URL; mine is Pulaski County, <code>https://api.weather.gov/zones/county/ARC119</code>.</p>
<pre><code class="language-rc">... \
    | jq \
    --arg pulaski_zone $pulaski_zone \
    --raw-output \
    '.[&quot;@graph&quot;][] | select(.affectedZones[] | contains($pulaski_zone))'
</code></pre>
<p>For the third, we use a combination of <code>jq</code> wrangling and <code>join</code>. First, we output each entry as a sorted, tab separated file containing two fields: the value of <code>@id</code>, and the full JSON representation of the entry. We store the file in a <code>state</code> directory, with the name <code>new</code>:</p>
<pre><code class="language-rc">... \
    | jq \
    --arg pulaski_zone $pulaski_zone \
    --raw-output \
    '.[&quot;@graph&quot;][] | select(.affectedZones[] | contains($pulaski_zone)) | &quot;\(.[&quot;@id&quot;])\t\(.)&quot;' \
    | sort \
    &gt;state/new
</code></pre>
<p>Next, we join the new entries with a file of previously seen <code>@id</code>s, print lines whose keys appear only in the <code>new</code> file (previously unseen), and drop the key column to create an <a href="https://github.com/ndjson/ndjson-spec"><code>ndjson</code></a>-formatted stream.</p>
<pre><code class="language-rc">if (! test -f state/seen)
    touch state/seen

join -v1 -t '	' state/new state/seen \
    | cut -f2
</code></pre>
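<p>The bookkeeping that <code>sort</code> and <code>join -v1</code> perform on disk can be sketched in memory; here in Go, with hypothetical <code>@id</code> keys:</p>

```go
package main

import "fmt"

// unseen mirrors `join -v1`: keep only the entries whose @id key has
// not been recorded in the seen set.
func unseen(entries map[string]string, seen map[string]bool) []string {
	var out []string
	for id, body := range entries {
		if !seen[id] {
			out = append(out, body)
		}
	}
	return out
}

func main() {
	entries := map[string]string{
		"urn:oid:1": `{"event":"Tornado Warning"}`,
		"urn:oid:2": `{"event":"Severe Thunderstorm Warning"}`,
	}
	seen := map[string]bool{"urn:oid:1": true}
	fmt.Println(unseen(entries, seen)) // only the previously unseen entry
}
```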
<p>Now that we have our set of alerts, we can format them for printing using the escape codes mentioned above:</p>
<pre><code class="language-rc">    | jq \
    --raw-output \
    '&quot;\u001b4\u001bE\(.event)\u001b5\u001bF\r\nFrom: \(.effective)\r\nUntil: \(.expires)\(.description | [scan(&quot;(was|were) located ([^\\.]+.)&quot;)] as $location | if ($location | length &gt; 0) then $location[0][1] | gsub(&quot;\n&quot;; &quot; &quot;) | sub(&quot;(?&lt;a&gt;^[a-z])&quot;; &quot;\(.a|ascii_upcase)&quot;) | &quot;\r\nLocated: \(.)&quot; else &quot;&quot; end)\r\n&quot;'
</code></pre>
<p>We need to dive into this <code>jq</code> query a bit. The first line is easy enough:</p>
<pre><code>\u001b4\u001bE\(.event)\u001b5\u001bF\r\n
</code></pre>
<p>The Unicode escape for <code>ESC</code> (<code>0x1b</code>) is <code>\u001b</code>; it is followed by 4 and E, then the value of <code>event</code>, which is our header (e.g. &quot;Tornado Warning&quot;), then 5 and F, which cancel the first two respectively. The line is terminated by <code>\r\n</code>. The next two lines are just as simple, but the <code>description</code> expression needs some complex logic.</p>
<pre><code>\(.description | [scan(&quot;(was|were) located ([^\\.]+.)&quot;)] as $location | if ($location | length &gt; 0) then $location[0][1] | gsub(&quot;\n&quot;; &quot; &quot;) | sub(&quot;(?&lt;a&gt;^[a-z])&quot;; &quot;\(.a|ascii_upcase)&quot;) | &quot;\r\nLocated: \(.)&quot; else &quot;&quot; end)
</code></pre>
<p>All this happens within a <code>\()</code>, which is how expressions are interpolated into strings. We pipe the value of <code>description</code>, a long text field, into <a href="https://jqlang.org/manual/#scan"><code>scan</code></a>. The <code>scan</code> function applies a regular expression and emits the capture groups as a stream. In our case, the regular expression <code>(was|were) located ([^\\.]+.)</code> matches text like &quot;was located over Haskell, or near Benton, moving northeast at 35 mph,&quot; stopping at the first period. The first parenthetical groups the alternation of &quot;was&quot; and &quot;were.&quot; The second wraps an inverted character class containing only a <code>.</code> (escaped with <code>\</code>, which is itself escaped with <code>\</code>), matching one or more characters which are not a period. It is followed by the metacharacter <code>.</code>, which could match any character but must be a period in this case. We wrap the entire scan in an array <code>[...]</code>; this will either be <code>[[&quot;was&quot;, &quot;over Haskell, or near Benton, moving northeast at 35\nmph.&quot;]]</code> on a match (doubly nested) or <code>[]</code> on no match.</p>
<blockquote>
<p>To capture all the matches for each input string, use the idiom <code>[ expr ]</code>, e.g. <code>[ scan(regex) ]</code>. If the regex contains capturing groups, the filter emits a stream of arrays, each of which contains the captured strings.</p>
</blockquote>
<p>We <a href="https://jqlang.org/manual/#variable-symbolic-binding-operator">assign this to a variable</a>, <code>$location</code>, so it can be referenced multiple times in the rest of the rule. Next, we check the length of the array: if there is a match it will be length one (one array of matches), otherwise zero. If we have matched some location information, we replace any newlines with spaces using <a href="https://jqlang.org/manual/#gsub"><code>gsub</code></a>, then replace the first character with its uppercase equivalent using <a href="https://jqlang.org/manual/#sub"><code>sub</code></a> and a named capture group <code>a</code>. Finally, we add a new line with our location information preceded by a field name; otherwise we print no line at all.</p>
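<p>The same extraction can be sketched outside of <code>jq</code>; here in Go, using the equivalent regular expression and string functions (my own translation, not part of the pipeline):</p>

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// locRe mirrors the jq scan: everything after "was/were located"
// up to and including the first period.
var locRe = regexp.MustCompile(`(was|were) located ([^.]+.)`)

// location extracts the clause, folds newlines to spaces, and
// uppercases the first letter, as the jq gsub/sub pair does.
func location(description string) string {
	m := locRe.FindStringSubmatch(description)
	if m == nil {
		return ""
	}
	s := strings.ReplaceAll(m[2], "\n", " ")
	return strings.ToUpper(s[:1]) + s[1:]
}

func main() {
	desc := "a severe thunderstorm capable of producing a tornado\nwas located over Haskell, or near Benton, moving northeast at 35\nmph."
	fmt.Println(location(desc))
	// → Over Haskell, or near Benton, moving northeast at 35 mph.
}
```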
<p>Finally, we use our own <code>tcpw</code> command, which simply writes to a TCP socket at a given address. Think <code>nc</code> or <code>dial</code> from <code>plan9port</code>, but one which doesn't wait for the connection to close.</p>
<pre><code class="language-rc">... \
    | tcpw --address star-sp700.home.arpa:9100
</code></pre>
<p>The result looks like this:</p>
<figure>
<img src="/resources/images/2025-04-23-noaa-receipts/print.jpg" alt="Weather alert printout" />
<figcaption>Weather alert printout</figcaption>
</figure>
<p>Putting it all together, see <a href="https://github.com/cptaffe/weather/blob/main/alerts"><code>alerts</code></a> on GitHub.</p>
<h2 id="deploying">Deploying</h2>
<p>Since we use the <code>rc</code> shell, we need to include <code>plan9port</code> in our Docker image. Doing so is fairly straightforward using the Alpine variant of the <code>golang</code> image: install a few prerequisites, clone the repository, add the <code>bin</code> directory to your <code>$PATH</code>, and run the install script:</p>
<pre><code>FROM golang:1.24.2-alpine3.21

RUN apk add --no-cache \
	jq \
	curl \
	git \
	build-base \
	linux-headers \
	perl

# Install plan9port (works as of 9da5b4451365e33c4f561d74a99ad5c17ff20fed)
ENV PLAN9=/usr/src/plan9port
ENV PATH=&quot;$PATH:$PLAN9/bin&quot;
WORKDIR /usr/src/plan9port
RUN git clone https://github.com/9fans/plan9port.git . &amp;&amp; \
	./INSTALL
</code></pre>
<p>My standard deploy script works as follows:</p>
<pre><code class="language-rc">#!/usr/bin/env rc
flag e +
flag x +

tag=`{git rev-parse --short HEAD}
image='us-south1-docker.pkg.dev/homelab-388417/homelab/weather'

# Build image
docker buildx build --platform linux/amd64 . --tag $image:$tag
docker tag $image:$tag $image:latest
docker push --quiet $image:$tag
docker push --quiet $image:latest

yq 'setpath([&quot;spec&quot;, &quot;template&quot;, &quot;spec&quot;, &quot;containers&quot;, 0, &quot;image&quot;]; &quot;'$image:$tag'&quot;)' &lt;k8s/deployment.yaml | kubectl apply -f -
</code></pre>
<p>We first grab the current commit hash to use for a tag, then we build and push the image. Finally, we use <code>yq</code> to replace the container image and pass to <code>kubectl</code> for application to the cluster.</p>
<p>In Acme, ensure <code>BUILDKIT_PROGRESS=plain</code> is set so that the output can be seen clearly in <code>win</code>.</p>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2025-01-01-tv-tuner</id>
    <title>TV Tuner</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2025-01-01-tv-tuner" />
    <published>2025-01-28T00:00:00-05:00</published>
    <summary>Watching over-the-air broadcast TV with Tvheadend</summary>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>I recently stumbled upon a boxed <a href="https://www.hauppauge.com/pages/products/data_hvr1600.html">Hauppauge WinTV-HVR-1600</a> PCI TV Tuner card at Goodwill, and wondered if I could use it to broadcast live TV over my network. I installed it into an available PCI slot on an old Dell Optiplex 755, and attached an antenna.</p>
<figure>
<img src="/resources/images/2025-01-01-tv-tuner/box.jpg" alt="Hauppauge WinTV-HVR 1600 box" />
<figcaption>Hauppauge WinTV-HVR 1600 box</figcaption>
</figure>
<p>We can see the card is recognized over PCI:</p>
<pre><code class="language-sh">; lspci | grep video
03:00.0 Multimedia video controller: Conexant Systems, Inc. CX23418 Single-Chip MPEG-2 Encoder with Integrated Analog Video/Broadcast Audio Decoder
</code></pre>
<p>We can then see the <code>cx18</code> driver for that chip loaded on boot:</p>
<pre><code class="language-sh">; sudo dmesg | grep -i CX23418
[   24.373431] cx18-0: cx23418 revision 01010000 (B)
[   24.623610] tveeprom: audio processor is CX23418 (idx 38)
[   24.623612] tveeprom: decoder processor is CX23418 (idx 31)
[   26.275958] cx18-0: loaded v4l-cx23418-cpu.fw firmware (158332 bytes)
[   26.482992] cx18-0: loaded v4l-cx23418-apu.fw firmware V00120000 (141200 bytes)
[   27.601932] cx18-0 843: loaded v4l-cx23418-dig.fw firmware (16382 bytes)
[   27.621767] cx18-0 843: verified load of v4l-cx23418-dig.fw firmware (16382 bytes)
</code></pre>
<p>To test scanning available channels, install <a href="https://github.com/stefantalpalaru/w_scan2?tab=readme-ov-file">w-scan2</a> on Ubuntu via the <a href="https://code.launchpad.net/~w-scan2/+archive/ubuntu/stable">PPA</a>:</p>
<pre><code class="language-sh">; sudo add-apt-repository ppa:w-scan2/stable
; sudo apt update
; sudo apt install w-scan2
</code></pre>
<p>Then do a scan, in this case outputting a channel list in VLC playlist format:</p>
<pre><code class="language-sh">; sudo w_scan2 -c US -L &gt; chans.xspf
</code></pre>
<h2 id="tvheadend">Tvheadend</h2>
<p><a href="https://tvheadend.org">Tvheadend</a> is an open source TV streaming and recording service. To install <code>tvheadend</code> on Ubuntu, follow the <a href="https://docs.tvheadend.org/documentation/installation/linux#deb-packages-debian-ubuntu-raspios">Linux Install Documentation</a>, which amounts to running:</p>
<pre><code class="language-sh">curl -1sLf 'https://dl.cloudsmith.io/public/tvheadend/tvheadend/setup.deb.sh' | sudo -E bash
</code></pre>
<p>Inspect the script to ensure it hasn't been tampered with. It adds the <code>tvheadend/tvheadend</code> repository at <code>/etc/apt/sources.list.d/tvheadend-tvheadend.list</code>, installs their GPG signing key, and ensures some <code>dpkg</code> components like <code>apt-transport-https</code> are installed.</p>
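<p>One way to do that inspection is to download the script to a file first, review it, and only then run it; a sketch:</p>
<pre><code class="language-sh"># Fetch the setup script without executing it
curl -1sLf 'https://dl.cloudsmith.io/public/tvheadend/tvheadend/setup.deb.sh' -o setup.deb.sh
# Review the contents
less setup.deb.sh
# Run it only once satisfied
sudo -E bash setup.deb.sh
</code></pre>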
<p>Then install <code>tvheadend</code>:</p>
<pre><code class="language-sh">sudo apt install tvheadend
</code></pre>
<p>You will be prompted for a superuser username and password for the web interface. The <code>tvheadend</code> service is started by default:</p>
<pre><code class="language-sh">systemctl status tvheadend.service
</code></pre>
<p>To see the full logs:</p>
<pre><code class="language-sh">journalctl -u tvheadend.service
</code></pre>
<p>Which reminds us of the ports <code>tvheadend</code> is listening on:</p>
<pre><code>Jan 15 01:10:52 typhoon systemd[1]: Started tvheadend.service - Tvheadend - a TV streaming server and DVR.
Jan 15 01:10:52 typhoon tvheadend[14972]: config: Using configuration from '/var/lib/tvheadend'
Jan 15 01:10:52 typhoon tvheadend[14972]: http: Starting HTTP server 0.0.0.0:9981
Jan 15 01:10:52 typhoon tvheadend[14972]: htsp: Starting HTSP server 0.0.0.0:9982
</code></pre>
<p>To allow apps that don't support the self-signed CA we'll use with the reverse proxy below, we need to keep listening on <code>0.0.0.0</code> and add a firewall rule. Define a new application profile in <code>/etc/ufw/applications.d/tvheadend</code>:</p>
<pre><code>[Tvheadend]
title=Tvheadend TV streaming server
description=Tvheadend is the leading TV streaming server for Linux.
# HTTP, HTSP
ports=9981,9982/tcp
</code></pre>
<p>Then enable the rule:</p>
<pre><code class="language-sh">; sudo ufw app update tvheadend
; sudo ufw allow tvheadend
Rule added
Rule added (v6)
</code></pre>
<p>Now, the mDNS record generated by the Avahi integration for autodiscovery will match the available ports. Apps like TvhClient on iOS can auto-discover the server using this record. Ideally we would load the self-signed certificate onto the iOS device, then modify the Avahi configuration to reflect the reverse proxied ports 80 and 443 with <code>_http._tcp</code> and <code>_https._tcp</code>. I'm not certain how to handle the HTSP port.</p>
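<p>To check what the Avahi integration is actually advertising, <code>avahi-browse</code> (from the <code>avahi-utils</code> package, assumed installed here) can resolve all service records on the network; look for the Tvheadend entries:</p>
<pre><code class="language-sh"># -a: all service types, -r: resolve to host/port, -t: exit after the dump
avahi-browse -art | grep -i -A3 tvheadend
</code></pre>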
<p>Continue through the wizard's setup process, associate your card (mine is detected as a Samsung S5H1409 QAM/8VSB) with an ATSC-T network, then associate that network with the <code>us-ATSC-center-frequencies-8VSB</code> mux. Once the setup is complete, a scan will commence which takes several minutes. I don't have cable, so the ATSC-C (cable) network is not useful.</p>
<p>From the UI, under the Configuration tab, you can see the relevant objects:</p>
<ul>
<li>Under DVB Inputs, TV adapters: the Linux <code>/dev/dvb/adapter0</code> device and the Samsung S5H1409 frontends for ATSC-T (terrestrial) and ATSC-C (cable).</li>
<li>Under DVB Inputs, Networks: the ATSC-T and ATSC-C networks which each adapter is associated with.</li>
<li>Under DVB Inputs, Muxes: a list of frequencies, each associated with a network.</li>
<li>Under DVB Inputs, Services: a list of channels with their number and name, each associated with a mux. For each mux with a successful scan, there will be an associated service. The <em>Map Services</em> operation will create a channel for each service.</li>
<li>Under Channel / EPG, Channels: a list of channels, each associated with a service. These are the channels which you will see in your Tvheadend clients.</li>
<li>Under Channel / EPG, EPG Grabber Modules: the list of Electronic Program Guide grabber modules. The Guide is constructed from either over-the-air EPG information from the default EIT module, or from service such as XMLTV using the <a href="https://wiki.xmltv.org/index.php/XmltvCapabilities">xmltv commands</a>.</li>
</ul>
<p>The files under <code>/var/lib/tvheadend/</code> represent each of these objects.</p>
<p>The Web UI uses a transcoding profile, <code>webtv-h264-aac-matroska</code>, to transform the raw stream from the tuner into a stream playable in the browser. This extra processing led to stuttering and audio issues. Using a client like TvhClient on iOS, which processes the raw stream on-device with VLC, provides the best experience.</p>
<figure>
<img src="/resources/images/2025-01-01-tv-tuner/tvheadend-ui.png" alt="Tvheadend UI displaying a broadcast channel stream of Forensic Files" />
<figcaption>Tvheadend UI displaying a broadcast channel stream of Forensic Files</figcaption>
</figure>
<p>Unfortunately, <a href="https://tvheadend.org/d/2666-using-analog-tv-with-tvheadend/2">Tvheadend no longer supports analog channels</a>, which limits our card's utility to ATSC-T instead of NTSC (for instance, from a VCR) or FM radio.</p>
<h2 id="iptv">IPTV</h2>
<p>Tvheadend supports MPEG-TS streams encoded over the network; however, guide (EPG) information must be provided by another source, e.g. XMLTV. See an example of some of these channels at <a href="https://tv.garden"><code>tv.garden</code></a>. Some IPTV channels are documented at <a href="https://github.com/iptv-org/iptv"><code>iptv-org/iptv</code></a>, available as <code>m3u</code> files. Each <code>m3u</code> file contains URLs which point to yet more <code>m3u</code> or <code>m3u8</code> (UTF-8) files; this continues recursively until a file contains a sequence of <code>.ts</code> (MPEG-TS) segments. Tvheadend supports some of these streams by default, but can support more with the help of <code>ffmpeg</code> using <code>pipe://</code> URIs.</p>
<p>The <code>/var/lib/tvheadend/ffmpeg-wrapper-m3u.sh</code> script to rewrite <code>m3u</code> files:</p>
<pre><code>#!/usr/bin/env bash
set -euo pipefail

curl -sSL &quot;$1&quot; | awk '/^#/ { print; next } { print &quot;pipe:///var/lib/tvheadend/ffmpeg-wrapper.sh&quot;, $0 }'
</code></pre>
<p>And the <code>/var/lib/tvheadend/ffmpeg-wrapper.sh</code> script which re-encodes MPEG-TS streams:</p>
<pre><code>#!/usr/bin/env bash
set -euo pipefail

ffmpeg -loglevel fatal -i &quot;$1&quot; -vcodec copy -acodec copy -f mpegts pipe:1
</code></pre>
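<p>The playlist rewrite can be sanity-checked locally, without touching the network, by piping a sample playlist through the same <code>awk</code> program (the example URL is a placeholder):</p>
<pre><code class="language-sh"># Comment lines pass through unchanged; URL lines gain the pipe:// wrapper prefix
printf '#EXTM3U\nhttps://example.com/stream.m3u8\n' |
    awk '/^#/ { print; next } { print &quot;pipe:///var/lib/tvheadend/ffmpeg-wrapper.sh&quot;, $0 }'
</code></pre>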
<p>To add channels like these to Tvheadend:</p>
<ol>
<li>Under Configuration, DVB Inputs, Networks, choose Add and then select IPTV Automatic Network. Name the network (e.g. <code>iptv-org/pbs</code>), toggle Enabled and Create Bouquet, and set the maximum number of input streams to 20 to prevent overloading the system with scans (or uncheck scan after creation). In the URL field, place <code>pipe:///var/lib/tvheadend/ffmpeg-wrapper-m3u.sh</code> followed by a space and the URL of your <code>m3u</code> file -- for example, the raw link to the GitHub <code>us_pbs.m3u</code> file, <code>https://raw.githubusercontent.com/iptv-org/iptv/refs/heads/master/streams/us_pbs.m3u</code>.</li>
<li>The system will pipe the <code>m3u</code> file through our <code>ffmpeg-wrapper-m3u.sh</code> script, which will rewrite each <code>m3u8</code> URL entry with a <code>pipe:///var/lib/tvheadend/ffmpeg-wrapper.sh</code> URI to reformat the stream. Then, the system scans each channel and determines if it contains a broadcast. If it does, a <em>mux</em> and a corresponding <em>service</em> are created. Under Configuration, DVB Inputs, Services select <em>Map Services</em> to create channels from each service.</li>
</ol>
<p>Now, these additional IPTV channels will be available as channels in clients, but without guide information.</p>
<h3 id="broadcastify">Broadcastify</h3>
<p>As an example, using <code>ffmpeg</code> we can add audio-only MPEG-A streams to Tvheadend. On <code>broadcastify.com</code>, you can locate a stream's URL in your browser's network tab once a feed is playing. They take the form <code>https://broadcastify.cdnstream1.com/{id}</code>, where <code>{id}</code> is the feed number.</p>
<ol>
<li>Under Configuration, DVB Inputs, Networks, choose Add, then select IPTV. Name the network (e.g. <em>IPTV Manual Network</em>) and toggle Enabled. This network can be reused for any manually added IPTV streams in the future.</li>
<li>Navigate to the Muxes tab, choose Add. For Network select our newly created <em>IPTV Manual Network</em>, then set EPG Scan to Disabled since this channel has no EPG information. Under URL place <code>pipe:///var/lib/tvheadend/ffmpeg-wrapper.sh https://broadcastify.cdnstream1.com/{id}</code> replacing <code>{id}</code> with your feed id. Set the Mux and Service name with the name of the feed. Click Create.</li>
<li>The new Mux should be automatically scanned. On your server, you should see this scan under <code>journalctl -fu tvheadend.service</code>.</li>
<li>Navigate to Services; you should find a new service with the name set in step two. Choose Map Services, then Map Selected Services, then locate your service. Click Map Services.</li>
<li>Under Configuration, Channel / EPG, Channels, you should see your new service as a channel.</li>
</ol>
<p>On your client, you'll now see the new channel using the service name set in step two. Playing the channel pipes the MPEG-A stream through <code>ffmpeg</code>, producing an MPEG-TS stream for Tvheadend.</p>
<h2 id="reverse-proxy">Reverse Proxy</h2>
<p>For my purposes, I want the HTTP interface to listen on <code>localhost</code> and to proxy to it from <code>nginx</code> listening on the standard HTTP port 80. That way, I can assign <code>dvr.home.arpa</code> to this machine and have that host route to the <code>tvheadend</code> web interface. To do that, we can edit <code>$OPTIONS</code> used by the <code>systemd</code> service at <code>/etc/default/tvheadend</code> and add the <code>--bindaddr</code> option specified in <code>man tvheadend</code>, so it reads:</p>
<pre><code>OPTIONS=&quot;-u hts -g video --bindaddr 127.0.0.1&quot;
</code></pre>
<p>Then edit the config at <code>/var/lib/tvheadend/config</code>, gleaning some documentation from <a href="https://github.com/tvheadend/tvheadend/blob/master/src/config.c"><code>config.c</code></a>. Enabling <code>proxy</code> allows for <code>X-Forwarded-For</code> support.</p>
<pre><code>        &quot;proxy&quot;: true,
        &quot;cors_origin&quot;: &quot;https://dvr.home.arpa&quot;,
</code></pre>
<p>Now we can restart <code>tvheadend</code>:</p>
<pre><code class="language-sh">sudo systemctl restart tvheadend.service
</code></pre>
<h3 id="reverse-proxy-1">Reverse Proxy</h3>
<p>To serve HTTPS on 443 via nginx, we need certificates. I use <code>cfssl</code> as described in my post on <a href="/posts/2024-05-05-pki">PKI</a>: after adding <code>servers/typhoon/typhoon.home.arpa.json</code> and the expected cert and key files as <code>make</code> targets, we simply run <code>make</code> to construct the certs and back them up.</p>
<p>Remember to add the intermediate and root certificates to form the full chain:</p>
<pre><code class="language-sh">cat servers/typhoon/typhoon.home.arpa-server.pem intermediate-ca.pem ca.pem &gt; servers/typhoon/typhoon.home.arpa-server-chain.pem
</code></pre>
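<p>Optionally, the chain can be verified with <code>openssl</code> before copying it over (assuming the same file layout as above):</p>
<pre><code class="language-sh"># Verify the server cert against the root, treating the intermediate as untrusted input
openssl verify -CAfile ca.pem -untrusted intermediate-ca.pem \
    servers/typhoon/typhoon.home.arpa-server.pem
</code></pre>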
<pre><code class="language-sh">; scp servers/typhoon/typhoon.home.arpa-server{-chain,-key}.pem typhoon.home.arpa:.
typhoon.home.arpa-server-key.pem                                                                                          100% 1679   490.6KB/s   00:00
typhoon.home.arpa-server-chain.pem
</code></pre>
<p>Then on <code>typhoon</code> (the desktop we are installing <code>tvheadend</code> on):</p>
<pre><code class="language-sh">sudo mv typhoon.home.arpa-server-chain.pem /etc/ssl/certs/typhoon.home.arpa-server.pem
sudo chown root:root /etc/ssl/certs/typhoon.home.arpa-server.pem
sudo chmod 644 /etc/ssl/certs/typhoon.home.arpa-server.pem
sudo mv typhoon.home.arpa-server-key.pem /etc/ssl/private/
sudo chown root:ssl-cert /etc/ssl/private/typhoon.home.arpa-server-key.pem
sudo chmod 640 /etc/ssl/private/typhoon.home.arpa-server-key.pem
</code></pre>
<p>Now we can install <code>nginx</code>:</p>
<pre><code class="language-sh">sudo apt install nginx
</code></pre>
<p>The nginx config at <code>/etc/nginx/conf.d/dvr.conf</code> looks like:</p>
<pre><code>server {
    listen       80;
    listen       [::]:80;
    server_name  dvr.home.arpa;
    root         /usr/share/nginx/html;

    return 301 https://$host$request_uri;
}

# Settings for a TLS enabled server.
server {
    listen       443 ssl http2;
    listen       [::]:443 ssl http2;
    server_name  dvr.home.arpa;
    root         /usr/share/nginx/html;

    ssl_certificate &quot;/etc/ssl/certs/typhoon.home.arpa-server.pem&quot;;
    ssl_certificate_key &quot;/etc/ssl/private/typhoon.home.arpa-server-key.pem&quot;;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout  10m;

    location / {
        proxy_pass http://127.0.0.1:9981/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection &quot;Upgrade&quot;;
    }

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
</code></pre>
<p>The <code>Upgrade</code> and <code>Connection</code> headers are required to enable <a href="https://www.f5.com/company/blog/nginx/websocket-nginx">proxying websockets</a>.</p>
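<p>A more robust variant of those two headers uses the standard <code>map</code> pattern from the nginx WebSocket documentation, placed in the <code>http</code> context (with <code>proxy_set_header Connection $connection_upgrade;</code> in the <code>location</code>), so plain requests don't carry a stale <code>Connection: Upgrade</code> header:</p>
<pre><code>map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
</code></pre>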
<p>And it passes testing:</p>
<pre><code class="language-sh">; sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
</code></pre>
<p>Now we can enable <code>nginx</code>:</p>
<pre><code class="language-sh">sudo systemctl enable nginx.service --now
</code></pre>
<p>And we can allow HTTP and HTTPS traffic through the <code>ufw</code> firewall:</p>
<pre><code class="language-sh">; sudo ufw allow https
Rule added
Rule added (v6)
; sudo ufw allow http
Rule added
Rule added (v6)
</code></pre>
<p>After adding a host override in pfSense to point <code>dvr.home.arpa</code> at the IP statically assigned through DHCP to our desktop, we can navigate to <code>dvr.home.arpa</code>.</p>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2024-12-22-add-network-interface</id>
    <title>Add a Network Interface with Ubuntu</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2024-12-22-add-network-interface" />
    <published>2024-12-22T00:00:00-05:00</published>
    <summary>Configuring a PCIe 10GbE SFP+ card to DHCP at boot time</summary>
    
    <media:content url="https://connor.zip/resources/images/2024-12-22-add-network-interface/hp-nc552sfp.jpg" medium="image" width="800" height="516"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>I recently needed to add a new network card to a Dell Optiplex 755 running Ubuntu Server. It has an integrated GbE port, but I'd already run fiber to this corner of the room and had a spare PCIe <a href="https://www.hpe.com/psnow/doc/c04148619">HP NC552SFP 10GbE 2-port SFP+ card</a>, sporting a couple of <a href="https://www.cisco.com/c/en/us/products/collateral/interfaces-modules/transceiver-modules/data_sheet_c78-455693.html">Cisco SFP-10G-SR SFP+ modules</a>. This combo is an affordable way to use 10GbE fiber at home, at <a href="https://www.ebay.com/itm/324170610255">$12/card</a> and <a href="https://www.ebay.com/itm/174074786600">$8/module</a>.</p>
<figure>
<img src="/resources/images/2024-12-22-add-network-interface/hp-nc552sfp.jpg" alt="HP NC552SFP 10GbE 2-port SFP+ card" />
<figcaption>HP NC552SFP 10GbE 2-port SFP+ card</figcaption>
</figure>
<p>After adding the card and rebooting the machine, it connected via the existing USB WiFi adapter, but not the 10GbE interface. The interface is shown under <code>ip addr</code> as <code>enp1s0f0</code> and <code>enp1s0f1</code> (one interface per port), but doesn't have an address assigned -- meaning Linux recognizes and supports the card but hasn't DHCP'd on that interface. Running <code>sudo dhclient</code> assigns it an address.</p>
<p>Ubuntu uses <code>netplan</code>; to add a new interface which will come up and DHCP on boot, we need to add it to the config at <code>/etc/netplan</code>. One way to do this is via the <code>netplan</code> command.</p>
<p>To add a new dual-port 10GbE SFP+ card, we can use:</p>
<pre><code class="language-sh">sudo netplan set &quot;ethernets.enp1s0f0={dhcp4: true, optional: true}&quot;
sudo netplan set &quot;ethernets.enp1s0f1={dhcp4: true, optional: true}&quot;
</code></pre>
<p>Which will generate an <code>/etc/netplan/70-netplan-set.yaml</code> file which looks like:</p>
<pre><code class="language-yaml">network:
  ethernets:
    enp1s0f0:
      dhcp4: true
      optional: true
    enp1s0f1:
      dhcp4: true
      optional: true
</code></pre>
<p>After restart, it'll pick up the new interfaces. We can see the DHCP'd addresses like so:</p>
<pre><code class="language-sh">; ip addr
...
3: enp1s0f0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 10:60:4b:94:c2:90 brd ff:ff:ff:ff:ff:ff
4: enp1s0f1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 10:60:4b:94:c2:94 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.5/16 metric 100 brd 10.0.255.255 scope global dynamic enp1s0f1
       valid_lft 5063sec preferred_lft 5063sec
    ...
</code></pre>
<p>Only one of the ports is assigned because the other is not connected.</p>
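<p>A full reboot isn't strictly necessary to pick up the new config: <code>netplan try</code> applies it with an automatic rollback if connectivity is lost, while <code>netplan apply</code> applies it immediately:</p>
<pre><code class="language-sh"># Apply with a confirmation timeout; rolls back if the change breaks connectivity
sudo netplan try
# Or apply unconditionally
sudo netplan apply
</code></pre>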
<p>Testing with <code>iperf3</code> shows only about 1.4 Gbps between this machine and a Fedora Linux VM on ESXi on an HP DL380 using the same card, through an IBM RackSwitch. To the pfSense VM which serves as my router, I can only get around 1 Gbps (1.4 Gbps using <code>-P 8</code>), which seems to be an issue with pfSense and the <code>vmx</code> devices. Between two Linux VMs <code>iperf3</code> reports 11.1 Gbps, but only 1.03 Gbps between a Linux machine and pfSense (1.88 using <code>-P 8</code>).</p>
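<p>Measurements like these can be reproduced with <code>iperf3</code> running in server mode on one end and the client connecting to it; the address below is the DHCP'd address from the <code>ip addr</code> output above, and <code>-P 8</code> opens eight parallel streams:</p>
<pre><code class="language-sh"># On the server
iperf3 -s
# On the client
iperf3 -c 10.0.3.5 -P 8
</code></pre>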
<figure class="graphviz">
<svg width="584pt" height="62pt" viewBox="0.00 0.00 584.00 62.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 58)"><polygon fill="white" stroke="none" points="-4,4 -4,-58 580,-58 580,4 -4,4"/><!-- optiplex --><g id="node1" class="node"><title>optiplex</title><polygon fill="none" stroke="black" points="144,-54 0,-54 0,0 144,0 144,-54"/><text text-anchor="middle" x="72" y="-22.7" font-family="Times,serif" font-size="14.00">Dell Optiplex 755</text></g><!-- switch --><g id="node2" class="node"><title>switch</title><polygon fill="none" stroke="black" points="392.25,-54 213,-54 213,0 392.25,0 392.25,-54"/><text text-anchor="middle" x="302.62" y="-22.7" font-family="Times,serif" font-size="14.00">IBM RackSwitch G8124</text></g>
<!-- optiplex&#45;&gt;switch -->
<g id="edge1" class="edge">
<title>optiplex&#45;&gt;switch</title>
<path fill="none" stroke="black" d="M144.4,-27C162.38,-27 182.03,-27 201.15,-27"/>
<polygon fill="black" stroke="black" points="201.14,-30.5 211.14,-27 201.14,-23.5 201.14,-30.5"/>
<text text-anchor="middle" x="178.5" y="-31.7" font-family="Times,serif" font-size="14.00">Fiber</text>
</g>
<!-- server -->
<g id="node3" class="node">
<title>server</title>
<polygon fill="none" stroke="black" points="576,-54 461.25,-54 461.25,0 576,0 576,-54"/>
<text text-anchor="middle" x="518.62" y="-22.7" font-family="Times,serif" font-size="14.00">HP DL380G7</text>
</g>
<!-- switch&#45;&gt;server -->
<g id="edge2" class="edge">
<title>switch&#45;&gt;server</title>
<path fill="none" stroke="black" d="M392.52,-27C411.55,-27 431.36,-27 449.45,-27"/>
<polygon fill="black" stroke="black" points="449.26,-30.5 459.26,-27 449.26,-23.5 449.26,-30.5"/>
<text text-anchor="middle" x="426.75" y="-31.7" font-family="Times,serif" font-size="14.00">Fiber</text>
</g>
</g>
</svg>
</figure>
<h2 id="wifi">WiFi</h2>
<p>I had previously configured a <a href="https://www.tp-link.com/us/home-networking/usb-adapter/archer-t2u-plus/">TP-LINK Archer T2U Plus</a> USB WiFi adapter. To configure it, first determine the device name via <code>ip addr</code> (in my case, <code>wlx984827e92b5a</code>), then configure <code>netplan</code> with the SSID and password:</p>
<pre><code class="language-sh">sudo netplan set &quot;wifis.wlx984827e92b5a={access-points: {My SSID: {password: my-password}}, dhcp4: true, optional: true}&quot;
</code></pre>
<p>Which will generate an <code>/etc/netplan/70-netplan-set.yaml</code> file which looks like:</p>
<pre><code class="language-yaml">network:
  wifis:
    wlx984827e92b5a:
      access-points:
        My SSID:
          password: my-password
      dhcp4: true
      optional: true
</code></pre>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2024-12-15-vt220</id>
    <title>VT220</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2024-12-15-vt220" />
    <published>2024-12-15T00:00:00-05:00</published>
    <summary>Using a VT220 with a MacBook Pro</summary>
    
    <media:content url="https://connor.zip/resources/images/2024-12-15-vt220/vt220-irssi.jpg" medium="image" width="600" height="800"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>A few years ago, I stumbled upon a DEC VT220 on Facebook Marketplace labeled &quot;old monitor.&quot; Of course I messaged the seller immediately and drove the half hour to meet at a Starbucks. I'd set it up before on a modern macOS system, but had issues with corrupt characters on scroll and placed it on a shelf. While testing an IBM 3151, I dusted it off and realized it's actually fully functional!</p>
<figure>
<img src="/resources/images/2024-12-15-vt220/vt220-irssi.jpg" alt="The irssi IRC client viewing #irssi on a VT220 terminal" />
<figcaption>The irssi IRC client viewing #irssi on a VT220 terminal</figcaption>
</figure>
<p>To connect the VT220 to your system, you'll need a USB to serial adapter like the <a href="https://tripplite.eaton.com/keyspan-high-speed-usb-to-serial-adapter~USA19HS">TRIPP-LITE Keyspan (USA-19HS)</a>, a null modem cable or null modem adapter, a DB9 to DB25 adapter, and possibly a DB9 or DB25 gender changer -- keep in mind both the Keyspan and the VT220 are male connectors.</p>
<figure class="graphviz">
<svg width="563pt" height="62pt" viewBox="0.00 0.00 563.00 62.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 58)"><polygon fill="white" stroke="none" points="-4,4 -4,-58 559,-58 559,4 -4,4"/><!-- ibook --><g id="node1" class="node"><title>ibook</title><polygon fill="none" stroke="black" points="72,-54 0,-54 0,0 72,0 72,-54"/><text text-anchor="middle" x="36" y="-22.7" font-family="Times,serif" font-size="14.00">iBook</text></g><!-- keyspan --><g id="node2" class="node"><title>keyspan</title><polygon fill="none" stroke="black" points="223.5,-54 134.25,-54 134.25,0 223.5,0 223.5,-54"/><text text-anchor="middle" x="178.88" y="-22.7" font-family="Times,serif" font-size="14.00">Keyspan</text></g><!-- ibook&#45;&gt;keyspan -->
<g id="edge1" class="edge">
<title>ibook&#45;&gt;keyspan</title>
<path fill="none" stroke="black" d="M72.19,-27C87.35,-27 105.45,-27 122.35,-27"/>
<polygon fill="black" stroke="black" points="122.29,-30.5 132.29,-27 122.29,-23.5 122.29,-30.5"/>
<text text-anchor="middle" x="103.12" y="-31.7" font-family="Times,serif" font-size="14.00">USB</text>
</g>
<!-- nullmodem -->
<g id="node3" class="node">
<title>nullmodem</title>
<polygon fill="none" stroke="black" points="408,-54 294.75,-54 294.75,0 408,0 408,-54"/>
<text text-anchor="middle" x="351.38" y="-22.7" font-family="Times,serif" font-size="14.00">Null Modem</text>
</g>
<!-- keyspan&#45;&gt;nullmodem -->
<g id="edge2" class="edge">
<title>keyspan&#45;&gt;nullmodem</title>
<path fill="none" stroke="black" d="M223.78,-27C241.76,-27 262.98,-27 282.86,-27"/>
<polygon fill="black" stroke="black" points="282.85,-30.5 292.85,-27 282.85,-23.5 282.85,-30.5"/>
<text text-anchor="middle" x="259.12" y="-31.7" font-family="Times,serif" font-size="14.00">Serial</text>
</g>
<!-- vt220 -->
<g id="node4" class="node">
<title>vt220</title>
<polygon fill="none" stroke="black" points="555,-54 479.25,-54 479.25,0 555,0 555,-54"/>
<text text-anchor="middle" x="517.12" y="-22.7" font-family="Times,serif" font-size="14.00">VT220</text>
</g>
<!-- nullmodem&#45;&gt;vt220 -->
<g id="edge3" class="edge">
<title>nullmodem&#45;&gt;vt220</title>
<path fill="none" stroke="black" d="M408.27,-27C427.51,-27 448.89,-27 467.51,-27"/>
<polygon fill="black" stroke="black" points="467.48,-30.5 477.48,-27 467.48,-23.5 467.48,-30.5"/>
<text text-anchor="middle" x="443.62" y="-31.7" font-family="Times,serif" font-size="14.00">Serial</text>
</g>
</g>
</svg>
</figure>
<p>As of writing, the <a href="https://tripplite.eaton.com/keyspan-high-speed-usb-to-serial-adapter~USA19HS">TRIPP-LITE Keyspan (USA-19HS)</a> drivers for macOS are not yet available for Sequoia, and previous driver versions fail to install with an error about the kernel extension signature. Fortunately, this USB serial adapter has been manufactured long enough that there's a driver for OS X 10.4 (the version running on my iBook) available from this <a href="http://www.fosh.com.au/article/keyspan-device-drivers">Keyspan Driver Archive</a> -- I chose <a href="http://www.fosh.com.au/downloads/ks/USA-49WG-Keyspan-Driver-MacOSX.zip">Model USA-49WG Keyspan Driver 2.5 - Mac OSX 10.2.8 - 10.4.x</a> and was able to connect my adapter and see it under <code>/dev/tty.KeySerial1</code>.</p>
<h2 id="macbook">MacBook</h2>
<p>On modern macOS, the adapter is available only via a longer adapter-specific name and as both a <code>tty</code> and <code>cu</code> variant, of which we'd use <code>cu</code>. See <a href="https://www.club.cc.cmu.edu/~mdille3/doc/mac_osx_serial_console.html">Setting up a Serial Console in Mac OS X</a> for details setting up a <code>launchd</code> service for <code>getty</code> as <code>/Library/LaunchDaemons/vt220.plist</code>:</p>
<pre><code class="language-xml">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
&lt;!DOCTYPE plist PUBLIC &quot;-//Apple//DTD PLIST 1.0//EN&quot; &quot;http://www.apple.com/DTDs/PropertyList-1.0.dtd&quot;&gt;
&lt;plist version=&quot;1.0&quot;&gt;
&lt;dict&gt;
        &lt;key&gt;Label&lt;/key&gt;
        &lt;string&gt;vt220&lt;/string&gt;
        &lt;key&gt;ProgramArguments&lt;/key&gt;
        &lt;array&gt;
                &lt;string&gt;/usr/libexec/getty&lt;/string&gt;
                &lt;string&gt;vt220&lt;/string&gt;
                &lt;string&gt;cu.serial-100014371&lt;/string&gt;
        &lt;/array&gt;
        &lt;key&gt;KeepAlive&lt;/key&gt;
        &lt;true/&gt;
&lt;/dict&gt;
&lt;/plist&gt;
</code></pre>
<p>The first argument to <code>getty</code> tells it the <em>type</em> of the terminal, defined in <code>/etc/gettytab</code>, and the second is the Keyspan device name as found under <code>/dev</code>. The <code>gettytab</code> entry is as follows (see <a href="https://man.freebsd.org/cgi/man.cgi?gettytab"><code>man gettytab</code></a>); note that <code>al</code> is optional and defines a user to log in automatically:</p>
<pre><code>vt220:\
        :np:im=\r\n:sp#19200:al=cptaffe:tt=vt220:
</code></pre>
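<p>After saving the plist, the service can be loaded without a reboot; a sketch using the classic <code>launchctl</code> subcommands (newer macOS releases also offer <code>launchctl bootstrap</code>):</p>
<pre><code class="language-sh"># Load the daemon and mark it to start at boot
sudo launchctl load -w /Library/LaunchDaemons/vt220.plist
# Confirm it's running
sudo launchctl list | grep vt220
</code></pre>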
<h2 id="ibook">iBook</h2>
<p>On OS X 10.4, we still have a functional <code>/etc/ttys</code> file (see <a href="https://man.freebsd.org/cgi/man.cgi?ttys"><code>man ttys</code></a>), so we can add a new entry for our serial adapter:</p>
<pre><code># name         getty                      type  status    comments
tty.KeySerial1 &quot;/usr/libexec/getty vt220&quot; vt220 on secure # VT220 via USB serial adapter
</code></pre>
<p>This instructs <code>getty</code> to use the serial port adapter as a serial console, with a <em>type</em> of <code>vt220</code> found in <code>/etc/gettytab</code>. We could also use the <code>std.19200</code> <em>type</em> here. This profile expects a console at 19200 baud, so we need to configure our VT220 as such. It also sets the <code>vt220</code> terminal type (<code>$TERM</code>) so that the system understands how to interact with it. For more information see <a href="https://man.freebsd.org/cgi/man.cgi?getty"><code>man getty</code></a>. The <code>/etc/gettytab</code> entry is the same as above, see <a href="https://man.freebsd.org/cgi/man.cgi?gettytab"><code>man gettytab</code></a> for details:</p>
<pre><code>vt220:\
        :np:sp#19200:al=cptaffe:tt=vt220:
</code></pre>
<p>The options have the following meanings, for undocumented FreeBSD options we use the <a href="https://man.openbsd.org/gettytab.5">OpenBSD <code>man gettytab</code></a>.</p>
<table>
<thead>
<tr>
<th>Option</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>np</code></td>
<td>Terminal uses no parity (i.e., 8-bit characters)</td>
</tr>
<tr>
<td><code>im</code></td>
<td>Initial (banner) message</td>
</tr>
<tr>
<td><code>sp</code></td>
<td>Line speed (input and output)</td>
</tr>
<tr>
<td><code>al</code></td>
<td>User to auto-login instead of prompting</td>
</tr>
<tr>
<td><code>tt</code></td>
<td>Terminal type (for environment, e.g. <code>$TERM</code>)</td>
</tr>
</tbody>
</table>
<p>Updating <code>/etc/ttys</code> requires a reboot; <code>sudo kill -HUP 1</code> was unsuccessful for me.</p>
<h2 id="vt220">VT220</h2>
<p>The VT220 <em>must</em> have a keyboard to operate, and it's a very particular keyboard which communicates with the terminal over a serial protocol<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. To open the Set-Up Directory with the LK201, press F3. To select an item, use the arrow keys and the Enter key on the numpad (not Return).</p>
<p>Under the Display section, I have:</p>
<ul>
<li>Interpret Controls</li>
<li>No Auto Wrap</li>
<li>Jump Scroll</li>
<li>Light Text, Dark Screen</li>
<li>Cursor</li>
<li>Block Style Cursor</li>
</ul>
<p>Under the General section, I have:</p>
<ul>
<li>VT200 Mode, 7 Bit Controls</li>
<li>User Defined Keys Locked</li>
<li>User Features Unlocked</li>
<li>Multinational</li>
<li>Numeric Keypad</li>
<li>Normal Cursor Keys</li>
<li>No New Line</li>
</ul>
<p>With <code>TERM=vt220</code>, the 7-bit controls are expected. Some systems like OS X have a definition for <code>vt220-8bit</code>, seen in the output of the <code>toe</code> command. VT200 mode supports more keys than VT100 mode. OS X does not function correctly with the <em>No New Line</em> setting, but toggling to <em>New Line</em> leads to duplicate prompt lines -- Linux (e.g. via <code>telnet</code>) works well with <em>No New Line</em>.</p>
<p>Under the Communications section, I have:</p>
<ul>
<li>Transmit=19200</li>
<li>Receive=19200</li>
<li>No XOFF</li>
<li>8 Bits, No Parity</li>
<li>1 Stop Bit</li>
<li>No Local Echo</li>
<li>EIA Port, Modem Control</li>
<li>Disconnect, 2s Delay</li>
<li>Limited Transit</li>
</ul>
<p>Toggling <em>EIA Port, Modem Control</em> instead of <em>EIA Port, Data Leads Only</em> led to a much smoother experience with paging data such as <code>man</code> pages because it enables hardware flow control. Without hardware flow control, the VT220 will sometimes struggle to handle the flow of incoming characters -- especially if <em>Smooth Scroll</em> is enabled. Additionally, when modem controls are enabled, <code>XOFF</code> is no longer necessary, which solves issues with beeping within <code>man</code> and issues with <code>irssi</code><sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>. The <code>getty</code> config <code>std.19200</code> expects both a speed of 19200 and 8N1: 8-bit, no parity, 1 stop bit.</p>
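<p>For comparison, the equivalent line settings on a Linux host can be applied with <code>stty</code>; the device path <code>/dev/ttyUSB0</code> is only an assumption for a typical USB serial adapter:</p>
<pre><code># 19200 baud, 8 data bits, no parity, 1 stop bit, RTS/CTS flow control
stty -F /dev/ttyUSB0 19200 cs8 -parenb -cstopb crtscts
# Print all current settings to verify
stty -F /dev/ttyUSB0 -a
</code></pre>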
<h2 id="usage">Usage</h2>
<p>Once logged into the iBook from the VT220, I <code>telnet</code> (to bypass unsupported SSH algorithms) to a Linux VM which serves as a console server on my local network, and <code>tmux attach</code> to a shared <code>tmux</code> session where I run <code>irssi</code> for use by my IBM PC XT or other <code>telnet</code> clients.</p>
<p>From my macOS machine with iTerm, I <code>ssh</code> into the same machine and run <code>tmux attach -CC</code>, which opens a native iTerm window for each <code>tmux</code> window. Now, when I select any <code>tmux</code> window, the VT220 (and any other clients) will refresh and display that window. The main advantage is that with the VT220 on my desk, I can use it as an interface for IRC, <code>vim</code>, etc. while using a comfortable, clicky IBM Model M keyboard instead of the notably subpar membranes on the LK201.</p>
<figure class="graphviz">
<svg width="630pt" height="134pt" viewBox="0.00 0.00 629.75 134.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 130)"><polygon fill="white" stroke="none" points="-4,4 -4,-130 625.75,-130 625.75,4 -4,4"/><!-- vt220 --><g id="node1" class="node"><title>vt220</title><polygon fill="none" stroke="black" points="97.5,-126 21.75,-126 21.75,-72 97.5,-72 97.5,-126"/><text text-anchor="middle" x="59.62" y="-94.7" font-family="Times,serif" font-size="14.00">VT220</text></g><!-- ibook --><g id="node2" class="node"><title>ibook</title><polygon fill="none" stroke="black" points="229,-126 157,-126 157,-72 229,-72 229,-126"/><text text-anchor="middle" x="193" y="-94.7" font-family="Times,serif" font-size="14.00">iBook</text></g><!-- vt220&#45;&gt;ibook -->
<g id="edge1" class="edge">
<title>vt220&#45;&gt;ibook</title>
<path fill="none" stroke="black" d="M97.92,-99C112.81,-99 130.12,-99 145.8,-99"/>
<polygon fill="black" stroke="black" points="145.36,-102.5 155.36,-99 145.36,-95.5 145.36,-102.5"/>
</g>
<!-- misc -->
<g id="node5" class="node">
<title>misc</title>
<polygon fill="none" stroke="black" points="417.25,-89 306.25,-89 306.25,-35 417.25,-35 417.25,-89"/>
<text text-anchor="middle" x="361.75" y="-57.7" font-family="Times,serif" font-size="14.00">Console VM</text>
</g>
<!-- ibook&#45;&gt;misc -->
<g id="edge4" class="edge">
<title>ibook&#45;&gt;misc</title>
<path fill="none" stroke="black" d="M229.36,-91.15C248.42,-86.93 272.55,-81.57 294.92,-76.61"/>
<polygon fill="black" stroke="black" points="295.59,-80.04 304.59,-74.46 294.07,-73.21 295.59,-80.04"/>
<text text-anchor="middle" x="268" y="-91.73" font-family="Times,serif" font-size="14.00">Telnet</text>
</g>
<!-- macbook -->
<g id="node3" class="node">
<title>macbook</title>
<polygon fill="none" stroke="black" points="119.25,-54 0,-54 0,0 119.25,0 119.25,-54"/>
<text text-anchor="middle" x="59.62" y="-22.7" font-family="Times,serif" font-size="14.00">MacBook Pro</text>
</g>
<!-- iterm -->
<g id="node4" class="node">
<title>iterm</title>
<polygon fill="none" stroke="black" points="229.75,-54 156.25,-54 156.25,0 229.75,0 229.75,-54"/>
<text text-anchor="middle" x="193" y="-22.7" font-family="Times,serif" font-size="14.00">iTerm</text>
</g>
<!-- macbook&#45;&gt;iterm -->
<g id="edge2" class="edge">
<title>macbook&#45;&gt;iterm</title>
<path fill="none" stroke="black" d="M119.4,-27C127.85,-27 136.45,-27 144.65,-27"/>
<polygon fill="black" stroke="black" points="144.38,-30.5 154.38,-27 144.38,-23.5 144.38,-30.5"/>
</g>
<!-- iterm&#45;&gt;misc -->
<g id="edge3" class="edge">
<title>iterm&#45;&gt;misc</title>
<path fill="none" stroke="black" d="M230.18,-34.59C249.11,-38.57 272.88,-43.56 294.93,-48.18"/>
<polygon fill="black" stroke="black" points="293.94,-51.55 304.45,-50.18 295.38,-44.7 293.94,-51.55"/>
<text text-anchor="middle" x="268" y="-51.3" font-family="Times,serif" font-size="14.00">SSH</text>
</g>
<!-- tmux -->
<g id="node6" class="node">
<title>tmux</title>
<polygon fill="none" stroke="black" points="522.5,-89 454.25,-89 454.25,-35 522.5,-35 522.5,-89"/>
<text text-anchor="middle" x="488.38" y="-57.7" font-family="Times,serif" font-size="14.00">tmux</text>
</g>
<!-- misc&#45;&gt;tmux -->
<g id="edge5" class="edge">
<title>misc&#45;&gt;tmux</title>
<path fill="none" stroke="black" d="M417.41,-62C425.76,-62 434.3,-62 442.41,-62"/>
<polygon fill="black" stroke="black" points="442.4,-65.5 452.4,-62 442.4,-58.5 442.4,-65.5"/>
</g>
<!-- irssi -->
<g id="node7" class="node">
<title>irssi</title>
<polygon fill="none" stroke="black" points="621.75,-89 559.5,-89 559.5,-35 621.75,-35 621.75,-89"/>
<text text-anchor="middle" x="590.62" y="-57.7" font-family="Times,serif" font-size="14.00">irssi</text>
</g>
<!-- tmux&#45;&gt;irssi -->
<g id="edge6" class="edge">
<title>tmux&#45;&gt;irssi</title>
<path fill="none" stroke="black" d="M522.76,-62C530.73,-62 539.35,-62 547.65,-62"/>
<polygon fill="black" stroke="black" points="547.58,-65.5 557.58,-62 547.58,-58.5 547.58,-65.5"/>
</g>
</g>
</svg>
</figure>
<p>See <a href="https://blog.joelbuckley.com.au/2021/07/os-x-vt220-part-1">Integrating a VT220 into my OS X workflow</a> for another, more elegant keyboardless solution (and the <em>More Information</em> section for a plethora of useful links).</p>
<h2 id="apps">Apps</h2>
<p>Most command-line programs and visual applications work well on a VT220, including <code>irssi</code> for IRC chats, <code>vim</code> for editing, <code>tmux</code> for terminal multiplexing, and <code>lynx</code> for navigating text-based websites. Joel Buckley writes about <a href="https://blog.joelbuckley.com.au/2021/07/os-x-vt220-part-2">using <code>mutt</code> for email on a VT510</a>. A monochrome display means that any applications which depend on colored output won't work well, and some application themes may not map well to monochrome; however, the amber glow is what gives the terminal its beauty.</p>
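<p>Since the display is monochrome, it can be handy to confirm what the host's terminfo database says about the terminal. A quick check with <code>tput</code>, assuming an <code>ncurses</code> terminfo entry for <code>vt220</code> is installed (as on most systems):</p>
<pre><code># vt220 defines no colors capability, so tput reports -1
TERM=vt220 tput colors
# Long descriptive name from the terminfo entry
TERM=vt220 tput longname
</code></pre>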
<figure>
<img src="/resources/images/2024-12-15-vt220/vt220-lynx.jpg" alt="Lynx browser viewing a Wikipedia article on a VT220 terminal" />
<figcaption>Lynx browser viewing a Wikipedia article on a VT220 terminal</figcaption>
</figure>
<h2 id="terminal-mux">Terminal Mux</h2>
<p>A USB serial adapter is great until you need to move your laptop, and a dedicated PC likely only has a single serial port. What if you want to run multiple terminals at once? Enter the terminal multiplexer: a networked appliance with a number of serial ports.</p>
<p>Terminal multiplexers like the DEC MUXserver could serve hundreds of terminals, or modems, over a single Ethernet connection<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>. In the age of the ubiquitous public switched telephone network, modems were the way to connect remotely -- to workers' homes or to satellite offices. And with the advent of dial-up Internet, ISPs procured these same appliances, such as the Livingston (later Lucent) Portmaster series<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>, to provide dial-up service using a fleet of serial-attached modems.</p>
<figure>
<img src="/resources/images/2024-12-15-vt220/vt220-portmaster.jpg" alt="Interfacing with a Portmaster 2 from a VT220" />
<figcaption>Interfacing with a Portmaster 2 from a VT220</figcaption>
</figure>
<p>I acquired a Livingston Portmaster 2 for a dial-up project on my home telephone network. To use it as a terminal server, we need a null modem adapter and a DB25 cable. For longer runs, DB25-to-RJ45 adapters with configurable pin-outs are popular. Once attached, the default connection is 9600 baud 8N1 on <code>s0</code>. The Portmaster 2 doesn't support DHCP, so it must be configured with a static IP address and netmask like so:</p>
<pre><code>ComOS - Livingston PortMaster

login: !root
Password:
portmaster2&gt; ifconfig
ether0: flags=16&lt;IP_UP,IPX_DOWN,BROADCAST&gt;
        inet 10.0.3.16 netmask ffff0000 broadcast 10.0.0.0 mtu 1500
portmaster2&gt; set ether0 address 10.0.3.16
Local (ether0) address changed from 10.0.3.16 to 10.0.3.16
portmaster2&gt; set ether0 netmask 255.255.0.0
ether0 netmask changed from 255.255.0.0 to 255.255.0.0
portmaster2&gt; set gateway 10.0.0.1
Gateway changed from 16.3.0.10.in-addr.arpa to 10.0.0.1, metric = 1
portmaster2&gt; set nameserver 10.0.0.1
Name Server changed from 192.168.1.1 to 10.0.0.1
portmaster2&gt; set domain home.arpa
Domain changed from heavy.computer to home.arpa
portmaster2&gt; save all
...
</code></pre>
<p>Note that by default the gateway was the reverse lookup for the <code>address</code>. Out of curiosity I tested this on my local network with <code>dig -t PTR 16.3.0.10.in-addr.arpa</code> and found that it would fail because there was no server with authority for <code>10.in-addr.arpa</code> -- however <code>nslookup</code> would work. The <a href="https://forum.opnsense.org/Archive/16_1_Legacy_Series/Unbound_and_stub_local_reverse_zones">solution</a>, if using pfSense, is to add the following to <em>Custom options</em> under <em>Services &gt; DNS Resolver</em>:</p>
<pre><code>local-zone: &quot;10.in-addr.arpa&quot; transparent
</code></pre>
<p>See the <a href="http://www.bitsavers.org/pdf/livingstonEnterprises/950-1201B_Configuration_Guide_for_Portmaster_Products_Dec95.pdf">Configuration Guide</a> for more information. Once configured, I registered it on my local DNS as <code>portmaster2.home.arpa</code> where it is reachable over <code>telnet</code> for remote administration. My terminal is now connected to the second serial port, <code>s1</code>, because <code>s0</code> is a special diagnostic port which cannot be configured (controlled by DIP switch):</p>
<pre><code>portmaster2&gt; show s1
----------------------- Current Status - Port S1 ---------------------------
        Status: USERNAME
         Input: 0                        Parity Errors: 0
        Output: 11                      Framing Errors: 0
       Pending: 0                       Overrun Errors: 0
  Modem Status: DCD-  CTS+

                Active Configuration    Default Configuration
                --------------------    ---------------------
     Port Type: Login                   Login
    Baud Rates: 9600                    9600,9600,9600
        Parity: none                    none
 Modem Control: off                     off

 Terminal Type: vt220
</code></pre>
<p>We can change settings, such as enabling hardware flow control or modem control, or increasing the speed:</p>
<pre><code>portmaster2&gt; set s1 rts/cts on
RTS/CTS flow control for port S1 changed from off to on
portmaster2&gt; set s1 speed 19200
Speed for port S1 (1) changed from 9600 to 19200
portmaster2&gt; reset s1
Resetting port S1
</code></pre>
<p>We can set up the port to automatically connect to a remote host with <code>TERM=vt220</code>:</p>
<pre><code>portmaster2&gt; set s1 service_login telnet
Login service for port S1 changed from portmaster to telnet
portmaster2&gt; set s1 host misc.home.arpa
Host changed from 192.168.1.1 to misc.home.arpa for S1
portmaster2&gt; set s2 termtype vt220
Terminal Type for port S2 changed from  to vt220
</code></pre>
<p>Now, connecting a terminal automatically connects us to the VM at <code>misc.home.arpa</code> over <code>telnet</code>!</p>
<figure class="graphviz">
<svg width="475pt" height="62pt" viewBox="0.00 0.00 475.25 62.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 58)"><polygon fill="white" stroke="none" points="-4,4 -4,-58 471.25,-58 471.25,4 -4,4"/><!-- vt220 --><g id="node1" class="node"><title>vt220</title><polygon fill="none" stroke="black" points="75.75,-54 0,-54 0,0 75.75,0 75.75,-54"/><text text-anchor="middle" x="37.88" y="-22.7" font-family="Times,serif" font-size="14.00">VT220</text></g><!-- portmaster --><g id="node2" class="node"><title>portmaster</title><polygon fill="none" stroke="black" points="334.5,-54 147,-54 147,0 334.5,0 334.5,-54"/><text text-anchor="middle" x="240.75" y="-22.7" font-family="Times,serif" font-size="14.00">Livingston Portmaster 2</text>
</g>
<!-- vt220&#45;&gt;portmaster -->
<g id="edge1" class="edge">
<title>vt220&#45;&gt;portmaster</title>
<path fill="none" stroke="black" d="M76.19,-27C93.11,-27 114.06,-27 135.25,-27"/>
<polygon fill="black" stroke="black" points="135.17,-30.5 145.17,-27 135.17,-23.5 135.17,-30.5"/>
<text text-anchor="middle" x="111.38" y="-31.7" font-family="Times,serif" font-size="14.00">Serial</text>
</g>
<!-- vm -->
<g id="node3" class="node">
<title>vm</title>
<polygon fill="none" stroke="black" points="467.25,-54 411,-54 411,0 467.25,0 467.25,-54"/>
<text text-anchor="middle" x="439.12" y="-22.7" font-family="Times,serif" font-size="14.00">VM</text>
</g>
<!-- portmaster&#45;&gt;vm -->
<g id="edge2" class="edge">
<title>portmaster&#45;&gt;vm</title>
<path fill="none" stroke="black" d="M334.76,-27C357.64,-27 380.89,-27 399.54,-27"/>
<polygon fill="black" stroke="black" points="399.27,-30.5 409.27,-27 399.27,-23.5 399.27,-30.5"/>
<text text-anchor="middle" x="372.75" y="-31.7" font-family="Times,serif" font-size="14.00">Telnet</text>
</g>
</g>
</svg>
</figure>
<h3 id="hardware-flow-control">Hardware Flow Control</h3>
<p>The Portmaster doesn't support DTR/DSR; see the <a href="/resources/pdfs/portmaster2-configuration-guide.pdf">PortMaster Configuration Guide</a>, page 6-19:</p>
<blockquote>
<p>Note - The PortMaster ignores DSR. Some PCs may require DSR high, but do not tie DSR to DTR.</p>
</blockquote>
<p>which the VT220 depends on:</p>
<blockquote>
<p>in modem control modes, transmits data only when RTS, CTS, DSR, and DTR are on.</p>
</blockquote>
<p>A workaround is to loop DTR to DSR, so that we can enable modem control on the VT220 and utilize hardware flow control with RTS/CTS. Even with only software flow control enabled, the VT220 performs flawlessly.</p>
<h2 id="rj-45-adapters">RJ-45 Adapters</h2>
<p>To easily extend the distance between the Portmaster and the terminals, we can use DB-25 to RJ-45 adapters and existing Cat5+ cabling. The Portmaster is a DTE device, so a rolled cable or null modem adapter is used to connect to another DTE device, such as the terminal.</p>
<p>The <a href="/resources/pdfs/portmaster2-configuration-guide.pdf">PortMaster Configuration Guide</a> contains the following table:</p>
<table>
<thead>
<tr>
<th>Pin</th>
<th>Description</th>
<th>Direction</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>Transmit Data (TXD)</td>
<td>Output</td>
</tr>
<tr>
<td>3</td>
<td>Receive Data (RCD)</td>
<td>Input</td>
</tr>
<tr>
<td>4</td>
<td>Request to Send (RTS)</td>
<td>Output</td>
</tr>
<tr>
<td>5</td>
<td>Clear to Send (CTS)</td>
<td>Input</td>
</tr>
<tr>
<td>6</td>
<td>Data Set Ready (DSR)</td>
<td>Input</td>
</tr>
<tr>
<td>7</td>
<td>Signal Ground</td>
<td></td>
</tr>
<tr>
<td>8</td>
<td>Data Carrier Detect (DCD)</td>
<td>Input</td>
</tr>
<tr>
<td>20</td>
<td>Data Terminal Ready (DTR)</td>
<td>Output</td>
</tr>
</tbody>
</table>
<blockquote>
<p>A null-modem cable is used to connect a terminal (DTE) to a console port. A null-modem cable crosses pins 2 and 3, and 4 and 5, pin 7 is straight-through, and pins 6 and 8 are connected to pin 20.</p>
</blockquote>
<p>This is the standard null-modem translation found in adapters such as the <a href="https://www.l-com.com/images/downloadables/2D/DMA074MF_2D.pdf">L-com DMA074MF</a>:</p>
<table>
<thead>
<tr>
<th>Pin</th>
<th>Pin</th>
<th>Description</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>3</td>
<td>Transmit Data (TXD)</td>
<td>Receive Data (RCD)</td>
</tr>
<tr>
<td>3</td>
<td>2</td>
<td>Receive Data (RCD)</td>
<td>Transmit Data (TXD)</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>Request to Send (RTS)</td>
<td>Clear to Send (CTS)</td>
</tr>
<tr>
<td>5</td>
<td>4</td>
<td>Clear to Send (CTS)</td>
<td>Request to Send (RTS)</td>
</tr>
<tr>
<td>6</td>
<td>20</td>
<td>Data Set Ready (DSR)</td>
<td>Data Terminal Ready (DTR)</td>
</tr>
<tr>
<td>7</td>
<td>7</td>
<td>Signal Ground</td>
<td>Signal Ground</td>
</tr>
<tr>
<td>8</td>
<td>20</td>
<td>Data Carrier Detect (DCD)</td>
<td>Data Terminal Ready (DTR)</td>
</tr>
<tr>
<td>20</td>
<td>6</td>
<td>Data Terminal Ready (DTR)</td>
<td>Data Set Ready (DSR)</td>
</tr>
<tr>
<td>20</td>
<td>8</td>
<td>Data Terminal Ready (DTR)</td>
<td>Data Carrier Detect (DCD)</td>
</tr>
</tbody>
</table>
<p>To create a Cisco style RJ-45 adapter identical to CAB-500DTF<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup> (see the <a href="https://www.cisco.com/c/en/us/support/docs/routers/7200-series-routers/12219-17.html">Serial Cable Connection Guide</a> and <a href="https://members.tripod.com/eric_hoffman/cables.html">Cabling Guide for RJ-45 Console and AUX Ports</a>), we can use this mapping:</p>
<table>
<thead>
<tr>
<th>RJ-45</th>
<th>DB-25</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>4</td>
<td>Request to Send (RTS)</td>
</tr>
<tr>
<td>2</td>
<td>20</td>
<td>Data Terminal Ready (DTR)</td>
</tr>
<tr>
<td>3</td>
<td>2</td>
<td>Transmit Data (TXD)</td>
</tr>
<tr>
<td>4</td>
<td>7</td>
<td>Signal Ground</td>
</tr>
<tr>
<td>5</td>
<td>7</td>
<td>Signal Ground</td>
</tr>
<tr>
<td>6</td>
<td>3</td>
<td>Receive Data (RCD)</td>
</tr>
<tr>
<td>7</td>
<td>6</td>
<td>Data Set Ready (DSR)</td>
</tr>
<tr>
<td>8</td>
<td>5</td>
<td>Clear to Send (CTS)</td>
</tr>
</tbody>
</table>
<p>DB-25 pin 8, Data Carrier Detect (DCD), is only used on modems (DCE) and irrelevant for DTE devices.</p>
<h2 id="dial-in">Dial in</h2>
<p>Now that we've experimented with direct serial connections, we can introduce a modem and dial in over the telephone! The simplest option is to connect a modem to our Livingston Portmaster 2, connect another modem to our VT220, and connect both to a home telephone network. What could be simpler? Enter the Lucent Portmaster 3.</p>
<p>At the core of the public switched telephone network was a synchronous digital network transmitting sampled PCM audio. This T-carrier system began with T1, which could support 24 telephone lines over two twisted pairs (1.544 Mbit/s). Enterprises could purchase T1 or T3 (672 channels at 44.736 Mbit/s) from AT&amp;T for phone lines and, later, data via ISDN. The last generation of modems supported 56k downlinks, which utilized every available bit of the digital transmission (T-carrier used a bit-robbing scheme for signaling in the last bit of 8-bit samples of the 8 kHz stream), and this is what the Lucent Portmaster 3 supports. It has two T1 line connections for up to 48 lines of 56k modem service -- no need for a separate modem rack.</p>
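<p>The arithmetic behind those numbers is easy to check: 24 channels of 8-bit samples at 8 kHz plus one framing bit per frame gives the T1 line rate, and a single channel minus the robbed bit gives the 56k modem ceiling:</p>
<pre><code># PCM payload: 24 channels x 8 bits x 8000 samples per second
payload=$((24 * 8 * 8000))    # 1536000 bit/s
# One framing bit per 193-bit frame, 8000 frames per second
total=$((payload + 8000))     # 1544000 bit/s, the T1 line rate
# One channel with only 7 usable bits per sample
modem=$((7 * 8000))           # 56000 bit/s, hence the 56k modem
echo $total $modem            # prints 1544000 56000
</code></pre>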
<p>My home phone system utilizes a Digium TE410 card to provide four T1 lines, two to an Adit 600 with FXS cards for individual POTS (subscriber) lines, and two to the Portmaster 3 for 56k modems. We can attach the VT220 to a physical Hayes modem over serial, connect it to a POTS line, and dial into the Lucent Portmaster 3.</p>
<figure class="graphviz">
<svg width="760pt" height="62pt" viewBox="0.00 0.00 759.50 62.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 58)"><polygon fill="white" stroke="none" points="-4,4 -4,-58 755.5,-58 755.5,4 -4,4"/><!-- vt220 --><g id="node1" class="node"><title>vt220</title><polygon fill="none" stroke="black" points="75.75,-54 0,-54 0,0 75.75,0 75.75,-54"/><text text-anchor="middle" x="37.88" y="-22.7" font-family="Times,serif" font-size="14.00">VT220</text></g><!-- modem --><g id="node2" class="node"><title>modem</title><polygon fill="none" stroke="black" points="273,-54 147,-54 147,0 273,0 273,-54"/><text text-anchor="middle" x="210" y="-22.7" font-family="Times,serif" font-size="14.00">Hayes Modem</text></g><!-- vt220&#45;&gt;modem -->
<g id="edge1" class="edge">
<title>vt220&#45;&gt;modem</title>
<path fill="none" stroke="black" d="M76.21,-27C93.62,-27 114.98,-27 135.44,-27"/>
<polygon fill="black" stroke="black" points="135.16,-30.5 145.16,-27 135.16,-23.5 135.16,-30.5"/>
<text text-anchor="middle" x="111.38" y="-31.7" font-family="Times,serif" font-size="14.00">Serial</text>
</g>
<!-- pbx -->
<g id="node3" class="node">
<title>pbx</title>
<polygon fill="none" stroke="black" points="405,-54 343.5,-54 343.5,0 405,0 405,-54"/>
<text text-anchor="middle" x="374.25" y="-22.7" font-family="Times,serif" font-size="14.00">PBX</text>
</g>
<!-- modem&#45;&gt;pbx -->
<g id="edge2" class="edge">
<title>modem&#45;&gt;pbx</title>
<path fill="none" stroke="black" d="M273.27,-27C292.82,-27 313.92,-27 331.67,-27"/>
<polygon fill="black" stroke="black" points="331.58,-30.5 341.58,-27 331.57,-23.5 331.58,-30.5"/>
<text text-anchor="middle" x="308.25" y="-31.7" font-family="Times,serif" font-size="14.00">POTS</text>
</g>
<!-- portmaster -->
<g id="node4" class="node">
<title>portmaster</title>
<polygon fill="none" stroke="black" points="618.75,-54 455.25,-54 455.25,0 618.75,0 618.75,-54"/>
<text text-anchor="middle" x="537" y="-22.7" font-family="Times,serif" font-size="14.00">Lucent Portmaster 3</text>
</g>
<!-- pbx&#45;&gt;portmaster -->
<g id="edge3" class="edge">
<title>pbx&#45;&gt;portmaster</title>
<path fill="none" stroke="black" d="M405.49,-27C416.68,-27 430.04,-27 443.87,-27"/>
<polygon fill="black" stroke="black" points="443.52,-30.5 453.52,-27 443.52,-23.5 443.52,-30.5"/>
<text text-anchor="middle" x="430.12" y="-31.7" font-family="Times,serif" font-size="14.00">T1</text>
</g>
<!-- vm -->
<g id="node5" class="node">
<title>vm</title>
<polygon fill="none" stroke="black" points="751.5,-54 695.25,-54 695.25,0 751.5,0 751.5,-54"/>
<text text-anchor="middle" x="723.38" y="-22.7" font-family="Times,serif" font-size="14.00">VM</text>
</g>
<!-- portmaster&#45;&gt;vm -->
<g id="edge4" class="edge">
<title>portmaster&#45;&gt;vm</title>
<path fill="none" stroke="black" d="M619.14,-27C641.59,-27 664.94,-27 683.73,-27"/>
<polygon fill="black" stroke="black" points="683.57,-30.5 693.57,-27 683.57,-23.5 683.57,-30.5"/>
<text text-anchor="middle" x="657" y="-31.7" font-family="Times,serif" font-size="14.00">Telnet</text>
</g>
</g>
</svg>
</figure>
<p>To be continued...</p>
<h2 id="links">Links</h2>
<p>Other resources I found along the way:</p>
<ul>
<li><a href="https://drewdevault.com/2016/03/22/Integrating-a-VT220-into-my-life.html">Integrating a VT220 into my life</a></li>
<li><a href="https://jstn.tumblr.com/post/8692501831">Justin's VT220 setup</a></li>
<li><a href="https://shuford.invisible-island.net/ibm_3151_setup_reset.txt">IBM 3151 Reset Procedure</a></li>
<li><a href="https://vt100.net/dec/ek-vt220-ug-002.pdf">VT220 Owner's Manual</a>, <a href="https://vt100.net/dec/ek-vt220-tm-001.pdf">VT220 Technical Manual</a></li>
<li><a href="https://docs.freebsd.org/en/books/handbook/serialcomms/">FreeBSD Handbook, Chapter 29: Serial Communications</a></li>
<li><a href="https://www.esva.net/~leo/subnet.html">How to Subnet a Class C Network with Livingston Portmasters</a></li>
</ul>
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>The LK201 keyboard communicates at 4800 baud using 8N1 over an EIA (Electronic Industries Association) RS-423 interface. The four conductors of the RJ11 connector (commonly used by telephones) are right-to-left data out, power, ground, and data in. Note that the serial protocol only denotes how the data is represented on the wire, not the pin-out which is specified by the D-subminiature specifications for e.g. DE-9 and DB-25. The RS-422 port used by the Macintosh and Apple IIGS differs in that each pin has a dedicated return line to avoid a common ground, because these ports can be connected over long distances with LocalTalk.</p>
<ul>
<li><a href="https://web.archive.org/web/20180703165508/https://peterbjornx.nl/vtkbd/">A DEC LK201 Emulator by Peter Bjorn (archived)</a></li>
<li><a href="https://www.netbsd.org/docs/Hardware/Machines/DEC/lk201.html">NetBSD LK201 Documentation by Dan McMahill</a></li>
<li><a href="https://vt100.net/keyboard.html">LK201 Keyboard by Paul Williams</a></li>
</ul>
&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></li>
<li id="fn:2">
<p>If <code>XOFF</code> is enabled, note that <code>irssi</code> blocks <code>^S</code>, so when in settings or during initial <code>tmux attach</code>, the VT220 will spam <code>S</code> to the output in an attempt to send <code>XOFF</code>. The <code>tmux attach</code> may actually never succeed if <code>irssi</code> is the active window when attaching.</p>
<p>I've opened <a href="https://github.com/irssi/irssi/issues/1547">issue #1547</a> after discussing it with the maintainer on IRC.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>See the <a href="https://manx-docs.org/collections/antonio/dec/MDS-1997-10/cd2/VOL002/0359.PDF">MUXserver 320 Hardware Installation Guide</a> for details on how the MUXserver 320 could be synchronously linked to MUXserver 300s, each of which could connect to up to 32 terminals.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>See <a href="https://lobste.rs/s/vpddbj/how_did_dial_up_isps_work#c_60ft2e">Joshua Stein's post on How Dial-Up ISPs Worked</a>; he does some incredible work on vintage Apples, for instance the <a href="https://jcs.org/wallops">Wallops IRC client</a> for System 6+ -- find him on Libera <code>#cyberpals</code>. For an example of how to run something like this at home, see <a href="https://www.w8dbm.com/dialup.html">Den's Dial-up Project</a>.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>The Cisco adapter for DCE devices, e.g. CAB-500DCM, has the following pinout:</p>
<table>
<thead>
<tr>
<th>RJ-45</th>
<th>DB-25</th>
<th>Color</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>5</td>
<td>Blue</td>
<td>Clear to Send (CTS)</td>
</tr>
<tr>
<td>2</td>
<td>8</td>
<td>Orange</td>
<td>Data Carrier Detect (DCD)</td>
</tr>
<tr>
<td>3</td>
<td>3</td>
<td>Black</td>
<td>Receive Data (RCD)</td>
</tr>
<tr>
<td>4</td>
<td>7</td>
<td>Red</td>
<td>Signal Ground</td>
</tr>
<tr>
<td>5</td>
<td>7</td>
<td>Green</td>
<td>Signal Ground</td>
</tr>
<tr>
<td>6</td>
<td>2</td>
<td>Yellow</td>
<td>Transmit Data (TXD)</td>
</tr>
<tr>
<td>7</td>
<td>20</td>
<td>Brown</td>
<td>Data Terminal Ready (DTR)</td>
</tr>
<tr>
<td>8</td>
<td>4</td>
<td>White</td>
<td>Request to Send (RTS)</td>
</tr>
</tbody>
</table>
&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2024-11-03-email2podcast</id>
    <title>From Newsletter to Podcast</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2024-11-03-email2podcast" />
    <published>2024-10-03T00:00:00-05:00</published>
    <summary>Generating an iTunes-compatible RSS feed from a newsletter with a linked audio recording</summary>
    
    <media:content url="https://connor.zip/resources/images/2024-11-03-email2podcasts/phone.jpg" medium="image" width="800" height="533"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
<p>Recently, I subscribed to a new newsletter and podcast called <a href="https://journalclub.io/">Journal Club</a>, a daily email in which Malcolm Diggs walks through a recently published paper related to the field of computer science -- often involving machine learning. It contains a transcript and links to an audio recording and the paper. Unfortunately, this isn't how I like to consume podcasts. Instead, I use the <a href="https://www.apple.com/apple-podcasts/">Apple Podcasts</a> app on my iPhone.</p>
<p>Is there a way to go from a series of emails in my iCloud account to an iTunes podcast?</p>
<figure>
<img src="/resources/images/2024-11-03-email2podcasts/phone.jpg" alt="Journal Club Podcast in Apple Podcasts on iPhone" />
<figcaption>Journal Club Podcast in Apple Podcasts on iPhone</figcaption>
</figure>
<h2 id="email">Email</h2>
<p>I use a <a href="https://support.apple.com/en-us/102540">custom domain</a> with iCloud mail to receive mail at <code>connor.zip</code> addresses. Since I don't control my mail server, I can't use existing filter languages like <a href="https://datatracker.ietf.org/doc/html/rfc5228">Sieve</a> to move or otherwise process emails.</p>
<h3 id="mailrules">MailRules</h3>
<p>Instead, I wrote a simple mail filtering utility which connects via IMAP and listens for new messages to process. <a href="https://github.com/cptaffe/mailrules"><code>mailrules</code></a> takes simple text rules such as:</p>
<pre><code>if to ~ &quot;^marketing[\\+\\.]&quot;
    then move &quot;Marketing&quot;;
</code></pre>
<p>This rule allows me to give out the address <code>marketing+llbean@connor.zip</code>, and when those emails arrive from any <code>From</code> address, they'll be delivered to the <code>Marketing</code> folder. Usually bogus email addresses would be returned to sender by iCloud, but with the <a href="https://support.apple.com/guide/icloud/allow-all-incoming-emails-mm9e3ee0680f/icloud">catch all</a> setting enabled they'll be delivered to my main address.</p>
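<p>The pattern in that rule is an ordinary regular expression (the doubled backslashes are rules-file escaping), so its behavior can be sketched with <code>grep -E</code>; the addresses below are made up for illustration:</p>
<pre><code># grep -Ec counts matching lines: the local part must start with
# marketing followed by a plus or a dot
echo marketing+llbean@connor.zip | grep -Ec '^marketing[+.]'   # prints 1 (match)
echo marketing@connor.zip | grep -Ec '^marketing[+.]'          # prints 0 (no match)
</code></pre>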
<ul>
<li>When <code>mailrules</code> starts, it applies its rules to all emails in the inbox.</li>
<li>Then, it waits for additional events such as incoming emails and applies any rules to that email.</li>
</ul>
<p>I use a <code>goyacc</code>-generated <a href="https://github.com/cptaffe/mailrules/tree/main/parse">parser</a> to implement the rule language, which consumes tokens produced by the lexer. The lexer is a modified version of Eli Bendersky's <a href="https://eli.thegreenplace.net/2022/a-faster-lexer-in-go/"><em>A Faster Lexer in Go</em></a><sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. The parser builds a list of rules from the input rules file, and matches each rule, in order, against each email based on the metadata fetched from the email server.</p>
<p>For instance, the above rule would become the tokens:</p>
<pre><code>IF IDENTIFIER TILDE QUOTE THEN MOVE QUOTE SEMICOLON
</code></pre>
<p>Let's walk through the relevant <code>yacc</code> <a href="https://github.com/cptaffe/mailrules/blob/main/parse/rules.y">rules</a>:</p>
<ul>
<li>We start at the <code>rules</code> rule, defined as either a single rule or a series of semicolon-delimited rules. Here we strip <code>SEMICOLON</code> off the end and <code>rule</code> must be <code>IF IDENTIFIER TILDE QUOTE THEN MOVE QUOTE</code>.
<pre><code>rules: rule SEMICOLON
    { $$ = append($$, $1) }
    ...
</code></pre>
</li>
<li>One of the options for a <code>rule</code> is an <code>if ... then</code> predicate followed by a <code>move</code> action. Here we break our tokens apart into a <code>IDENTIFIER TILDE QUOTE</code> and <code>MOVE QUOTE</code> to fill in the blanks.
<pre><code>rule: IF condition THEN move
    {
        $4.Predicate = $2
        $$ = $4
    }
    ...
</code></pre>
</li>
<li>Focusing on the latter part of the <code>if ... then</code>, the <code>move</code> is a simple keyword followed by a <code>string</code>. Notice its first argument, the predicate, is empty; it's assigned within the <code>if ... then</code> rule once the <code>condition</code> is resolved. With <code>MOVE</code> covered, <code>string</code> must be <code>QUOTE</code>.
<pre><code>move: MOVE string
    { $$ = rules.NewMoveRule(nil, $2) }
</code></pre>
</li>
<li>Within the <code>if ... then</code>, the <code>condition</code> can be a simple <code>comparison</code>, or it can contain <code>and</code>, <code>or</code>, <code>not</code>, etc. We're still handling <code>IDENTIFIER TILDE QUOTE</code> at this point.
<pre><code>condition: comparison
    { $$ = $1 }
    ...
</code></pre>
</li>
<li>The <code>comparison</code> we use here is the <code>~</code> regular expression match. With <code>IDENTIFIER TILDE</code> covered, <code>string</code> must be <code>QUOTE</code>.
<pre><code>comparison:
    IDENTIFIER TILDE string
    {
        rexp, err := regexp.Compile($3)
        if err != nil {
            yylex.Error(fmt.Sprintf(&quot;malformed regex '%s' in predicate: %v&quot;, $3, err))
            return -1
        }
        $$, err = rules.NewFieldPredicate($1, rexp)
        if err != nil {
            yylex.Error(err.Error())
            return -1
        }
    }
    ...
</code></pre>
</li>
<li>And as we expect, <code>string</code> is a <code>QUOTE</code> atom where we've handled normalizing escaped quotes:
<pre><code>string: QUOTE
    { $$ = strings.ReplaceAll(strings.ReplaceAll($1[1:len($1)-1], &quot;\\\&quot;&quot;, &quot;\&quot;&quot;), &quot;\\\\&quot;, &quot;\\&quot;) }
</code></pre>
</li>
</ul>
<p>Which results in <code>[MoveRule(FieldPredicate(&quot;to&quot;, /^marketing[\+\.]/), &quot;Marketing&quot;)]</code>.</p>
<p>I then use the <a href="https://pkg.go.dev/github.com/emersion/go-imap@v1.2.1"><code>go-imap</code></a> package to interact with the mail server. First we fetch metadata from the server; the following is condensed:</p>
<ul>
<li>Connect using TLS to our mail server</li>
<li>Login using our application credentials</li>
<li>Select the <code>INBOX</code> as our active mailbox</li>
<li>Use an infinite range to select all emails</li>
</ul>
<pre><code class="language-go">c, _ := client.DialTLS(&quot;imap.mail.me.com:993&quot;, nil)
c.Login(&quot;username&quot;, &quot;password&quot;)
mbox, _ := c.Select(&quot;INBOX&quot;, false)

// within processMailbox
seqset := new(imap.SeqSet)
seqset.AddRange(1, 0) // 0 is interpreted as &quot;*&quot;, i.e. the highest UID
messages := make(chan *imap.Message, 10)
done := make(chan error, 1)
go func() {
    done &lt;- c.UidFetch(seqset, []imap.FetchItem{imap.FetchUid, imap.FetchEnvelope}, messages)
}()
</code></pre>
<p>We use UIDs instead of sequence numbers because the sequence number of a message will change if a message with a lower sequence number is moved out of the inbox, which can lead to strange behavior. The envelope contains just enough metadata to apply our rules, without pulling the entire body and attachments.</p>
<p>Then, we match the rules against each of the emails.</p>
<pre><code class="language-go">for msg := range messages {
    for _, rule := range rules {
        rule.Message(msg)
    }
}
</code></pre>
<p>Each rule's action is then applied to all the emails it matched, in the order of the rules file:</p>
<pre><code class="language-go">for _, rule := range rules {
    err := rule.Action(c)
    if err != nil {
        log.Println(&quot;Apply rule:&quot;, err)
    }
}
</code></pre>
<p>After the first pass, we wait for updates to our mailbox, then re-process:</p>
<pre><code class="language-go">for {
    processMailbox(c, mbox, rules)

    log.Println(&quot;Listening...&quot;)

    // Create a channel to receive mailbox updates
    updates := make(chan client.Update)
    c.Updates = updates

    // Start idling
    stop := make(chan struct{})
    done := make(chan error, 1)
    go func() {
        done &lt;- c.Idle(stop, nil)
    }()

    // Listen for updates; a labeled break replaces a goto here, since Go
    // does not allow a label immediately before a closing brace
Wait:
    for {
        select {
        case update := &lt;-updates:
            switch update := update.(type) {
            case *client.MailboxUpdate:
                if update.Mailbox.Name != &quot;INBOX&quot; {
                    break
                }
                log.Println(&quot;Saw change to Inbox&quot;)

                // stop idling; Idle returns and signals done
                close(stop)
                close(updates)
                c.Updates = nil
            }
        case err := &lt;-done:
            if err != nil {
                log.Fatal(err)
            }
            break Wait
        }
    }
}
</code></pre>
<p>The rule keeps track of which messages matched and resets its internal state within <code>Action</code>. For instance, the <code>Message</code> match function for the <code>move</code> rule looks like:</p>
<pre><code class="language-go">func (r MoveRule) Message(msg *imap.Message) {
	if r.Predicate.MatchMessage(msg) {
		log.Printf(&quot;Moving '%s' to '%s'&quot;, msg.Envelope.Subject, r.Mailbox)
		r.messages.AddNum(msg.Uid)
	}
}
</code></pre>
<p>Here <code>r.messages</code> is an <code>imap.SeqSet</code>, which is used to represent a set of message UIDs. Also note that the predicate is pluggable and is swapped in by the parser matching logic based on whether the predicate is a simple regex or equivalence match or a more complex boolean logic statement.</p>
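<p>The matching <code>Action</code> can be sketched as follows. To keep the sketch self-contained, the <code>imap.SeqSet</code> is replaced by a plain slice of UIDs and the client call is injected as a function; the real implementation issues the UID move through the <code>go-imap</code> client.</p>
<pre><code class="language-go">// MoveRuleSketch accumulates matched UIDs, then moves them in one batch.
type MoveRuleSketch struct {
	Mailbox string
	uids    []uint32
}

// AddMatch records a message UID matched by the predicate.
func (r *MoveRuleSketch) AddMatch(uid uint32) { r.uids = append(r.uids, uid) }

// Action moves every matched message, then resets the match state so the
// next pass over the inbox starts fresh.
func (r *MoveRuleSketch) Action(move func(uids []uint32, mailbox string) error) error {
	if len(r.uids) == 0 {
		return nil // nothing matched this pass
	}
	uids := r.uids
	r.uids = nil // reset internal state, as described above
	return move(uids, r.Mailbox)
}
</code></pre>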
<h3 id="stream">Stream</h3>
<p>To keep <code>mailrules</code> a generic IMAP email processing tool, I added a new <code>stream</code> command, which can be plugged into any number of backends. The rule looks like this:</p>
<pre><code>if from ~ &quot;^members@journalclub.io$&quot;
    then stream rfc822 &quot;curl --silent --show-error --fail-with-body --header \&quot;Content-Type: message/rfc822\&quot; --header \&quot;Accept: application/json\&quot; --data-binary @- http://email2rss/journalclub/email&quot;;
</code></pre>
<p>When the <code>from</code> address matches our regular expression, this rule sends the entire <a href="https://datatracker.ietf.org/doc/html/rfc822">RFC 822</a> formatted email message into the input of the command provided. The command can be anything, in this case we use <a href="https://curl.se/"><code>curl</code></a> to send the body of the email to a sibling service running on the same Kubernetes cluster, <code>email2rss</code>.</p>
<p>To fetch the full representation of the email, <code>StreamRule</code>'s <code>Action</code> function:</p>
<ul>
<li>Initiates a Fetch using the UID set constructed in its <code>Message</code> matching logic, in which it asks for <code>UID</code>, <code>RFC822.HEADER</code>, and <code>RFC822.TEXT</code>.</li>
<li>Finds the header and text portions of the response for a given message and concatenates them together.</li>
<li>Executes the command with the stdin set to the message.</li>
</ul>
<p>Since this rule asks for the <code>rfc822</code> representation of a message instead of <code>html</code>, we don't attempt to parse the body of the message.</p>
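<p>The final step, executing the command with the message on standard input, amounts to something like this sketch (the real rule parses the quoted command string itself; handing it to <code>sh -c</code> here is a simplification):</p>
<pre><code class="language-go">// stream runs command with the raw RFC 822 message on stdin and returns
// whatever the command writes to stdout.
func stream(command string, message []byte) ([]byte, error) {
	cmd := exec.Command(&quot;sh&quot;, &quot;-c&quot;, command)
	cmd.Stdin = bytes.NewReader(message)
	return cmd.Output()
}
</code></pre>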
<h2 id="podcasts">Podcasts</h2>
<p>Apple Podcasts supports ingesting RSS feeds as long as they meet its <a href="https://podcasters.apple.com/support/823-podcast-requirements">requirements</a>, which mostly involves the use of the <code>itunes</code> namespace and the recently standardized <a href="https://podcastnamespace.org/"><code>podcast</code> namespace</a>. See also Apple's <a href="https://help.apple.com/itc/podcasts_connect/#/itcb54353390">required tags</a> page and their <a href="https://help.apple.com/itc/podcasts_connect/#/itcbaf351599">sample feed</a>.</p>
<p>Here's an example of what we need to produce:</p>
<pre><code class="language-xml">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
&lt;rss version=&quot;2.0&quot;
  xmlns:atom=&quot;http://www.w3.org/2005/Atom&quot;
  xmlns:content=&quot;http://purl.org/rss/1.0/modules/content/&quot;
  xmlns:itunes=&quot;http://www.itunes.com/dtds/podcast-1.0.dtd&quot;
  xmlns:podcast=&quot;https://podcastindex.org/namespace/1.0&quot; &gt;
  &lt;channel&gt;
    &lt;title&gt;Journal Club&lt;/title&gt;
    &lt;link&gt;https://journalclub.io/&lt;/link&gt;
    &lt;atom:link href=&quot;{REDACTED}&quot; rel=&quot;self&quot; type=&quot;application/rss+xml&quot; /&gt;
    &lt;language&gt;en-us&lt;/language&gt;
    &lt;copyright&gt;&amp;#169; 2024 JournalClub.io&lt;/copyright&gt;
    &lt;itunes:author&gt;Journal Club&lt;/itunes:author&gt;
    &lt;description&gt; Journal Club is a premium daily newsletter and podcast authored and hosted by Malcolm Diggs. Each episode is lovingly crafted by hand, and delivered to your inbox every morning in text and audio form.&lt;/description&gt;
    &lt;itunes:image href=&quot;https://www.journalclub.io/cdn-cgi/image/width=1000/images/journals/journal-splash.png&quot;/&gt;
    &lt;itunes:category text=&quot;Science&quot; /&gt;
    &lt;itunes:explicit&gt;false&lt;/itunes:explicit&gt;
    &lt;item&gt;
        &lt;title&gt;Employing deep learning in crisis management and decision making through prediction using time series data in Mosul Dam Northern Iraq&lt;/title&gt;
        &lt;description&gt;
          &lt;![CDATA[
          &lt;p&gt;Today's article comes from the PeerJ Computer Science journal. The authors are Khafaji et al., from the University of Sfax, in Tunisia. In this paper they attempt to develop machine learning models that can predict the water-level fluctuations within a dam in Iraq. If they succeed, it will help the dam operators prevent a catastrophic collapse. Let's see how well they did.&lt;/p&gt;]]&gt;
        &lt;/description&gt;
        &lt;guid isPermaLink=&quot;false&quot;&gt;1b1dd75f-e37e-4c55-b759-dea3b1dbba3a&lt;/guid&gt;
        &lt;pubDate&gt;Sun, 03 Nov 2024 13:55:35 UTC&lt;/pubDate&gt;
        &lt;enclosure url=&quot;{REDACTED}&quot; length=&quot;12926609&quot; type=&quot;audio/mpeg&quot; /&gt;
        &lt;itunes:image href=&quot;https://embed.filekitcdn.com/e/3Uk7tL4uX5yjQZM3sj7FA5/sSM8ecFNXywfm7M3qy1tWu&quot; /&gt;
        &lt;itunes:explicit&gt;false&lt;/itunes:explicit&gt;
    &lt;/item&gt;
  &lt;/channel&gt;
&lt;/rss&gt;
</code></pre>
<p>Podcasts are <a href="https://www.rssboard.org/rss-specification">RSS 2.0</a> feeds; in 2023, Apple <a href="https://podcasters.apple.com/4115-technical-updates-for-hosting-providers">deprecated</a> the use of Atom feeds.</p>
<table>
<thead>
<tr>
<th>Object</th>
<th>Fields</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>&lt;channel&gt;</code></td>
<td><code>&lt;title&gt;</code></td>
<td>The title of the feed; we use the name of the newsletter.</td>
</tr>
<tr>
<td><code>&lt;channel&gt;</code></td>
<td><code>&lt;link&gt;</code></td>
<td>A link to the source of the information in the feed. Since this feed is based on an email newsletter and not a website, we use the homepage of the feed.</td>
</tr>
<tr>
<td><code>&lt;channel&gt;</code></td>
<td><code>&lt;atom:link&gt;</code></td>
<td>A self-link in the <code>atom</code> namespace, back-porting this feature from the Atom feed specification to RSS 2.0. We place the URL of the feed file itself here.</td>
</tr>
<tr>
<td><code>&lt;channel&gt;</code></td>
<td><code>&lt;language&gt;</code></td>
<td>The language of the content, in the same format as the <code>Accept-Language</code> HTTP header.</td>
</tr>
<tr>
<td><code>&lt;channel&gt;</code></td>
<td><code>&lt;copyright&gt;</code></td>
<td>Who owns the rights to the content in this feed; we use the copyright statement from the homepage.</td>
</tr>
<tr>
<td><code>&lt;channel&gt;</code></td>
<td><code>&lt;itunes:author&gt;</code></td>
<td>The first of the <code>itunes</code>-namespaced fields: the author of the content.</td>
</tr>
<tr>
<td><code>&lt;channel&gt;</code></td>
<td><code>&lt;description&gt;</code></td>
<td>A description of the content; we use the one available on the website.</td>
</tr>
<tr>
<td><code>&lt;channel&gt;</code></td>
<td><code>&lt;itunes:image&gt;</code></td>
<td>The image to use as cover art.</td>
</tr>
<tr>
<td><code>&lt;channel&gt;</code></td>
<td><code>&lt;itunes:category&gt;</code></td>
<td>The category of the podcast; this can also contain a subcategory.</td>
</tr>
<tr>
<td><code>&lt;channel&gt;</code></td>
<td><code>&lt;itunes:explicit&gt;</code></td>
<td>Whether or not this podcast contains explicit content.</td>
</tr>
<tr>
<td><code>&lt;item&gt;</code></td>
<td><code>&lt;title&gt;</code></td>
<td>The title of a podcast episode, extracted from the <code>Subject</code> line of the email.</td>
</tr>
<tr>
<td><code>&lt;item&gt;</code></td>
<td><code>&lt;description&gt;</code></td>
<td>A description of the podcast episode, taken from the first paragraph of the body of the email. This field can contain HTML tags such as paragraphs and links by using a <code>CDATA</code> block.</td>
</tr>
<tr>
<td><code>&lt;item&gt;</code></td>
<td><code>&lt;guid&gt;</code></td>
<td>A globally unique id; we use the <code>X-Apple-UUID</code> header, so we must set <code>isPermaLink</code> to false since it's not a URL to the content.</td>
</tr>
<tr>
<td><code>&lt;item&gt;</code></td>
<td><code>&lt;pubDate&gt;</code></td>
<td>The date the podcast episode was published; we use the <code>Date</code> field from the email. This won't work for back-dated episodes: for instance, Journal Club has a mechanism to resend old episodes, and those emails would have a renewed send date.</td>
</tr>
<tr>
<td><code>&lt;item&gt;</code></td>
<td><code>&lt;enclosure&gt;</code></td>
<td>The audio of the podcast episode, a URL along with its MIME type and file size.</td>
</tr>
<tr>
<td><code>&lt;item&gt;</code></td>
<td><code>&lt;itunes:image&gt;</code></td>
<td>The image to use for a specific podcast episode. We use the paper image, but it's so small Apple's podcast app ignores it.</td>
</tr>
<tr>
<td><code>&lt;item&gt;</code></td>
<td><code>&lt;itunes:explicit&gt;</code></td>
<td>Whether this particular episode is explicit.</td>
</tr>
</tbody>
</table>
<p>We can use the <a href="https://validator.w3.org/feed/">W3C Feed Validation Service</a> and the <a href="https://podba.se/validate/">Podbase Podcast Validator</a> for podcast-specific validation.</p>
<h3 id="email2rss">Email2RSS</h3>
<p>At this point, <code>mailrules</code> has shelled out to <code>curl</code>, which has sent the body of our Journal Club email to a sibling <a href="https://github.com/cptaffe/email2rss"><code>email2rss</code></a> service deployed in the same Kubernetes cluster as <code>mailrules</code>. This service has two relevant endpoints:</p>
<ul>
<li><code>GET /{feed}/feed.xml</code> which fetches the generated Podcast RSS.</li>
<li><code>POST /{feed}/email</code> which accepts an RFC 822 formatted email and updates the Podcast RSS.</li>
</ul>
<h4 id="post-feedemail"><code>POST /{feed}/email</code></h4>
<p>The <code>POST</code> endpoint first needs to parse the input email to find the HTML representation we'll be pulling relevant information from. To do that, we parse the RFC 822 message with Go's <code>net/mail</code> package via <code>mail.ReadMessage(req.Body)</code>. Then, we extract <code>msg.Header.Date()</code>, which, formatted as RFC 3339, becomes the key for our state in cloud storage.</p>
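<p>A minimal sketch of that parsing step (the exact key format is an assumption for illustration):</p>
<pre><code class="language-go">raw := &quot;Date: Sun, 03 Nov 2024 13:55:35 +0000\r\n&quot; +
	&quot;Subject: Hello\r\n&quot; +
	&quot;\r\n&quot; +
	&quot;body\r\n&quot;
msg, err := mail.ReadMessage(strings.NewReader(raw))
if err != nil {
	log.Fatal(err)
}
date, err := msg.Header.Date()
if err != nil {
	log.Fatal(err)
}
// The RFC 3339 formatted date becomes the state key in cloud storage
key := date.UTC().Format(time.RFC3339)
fmt.Println(key) // 2024-11-03T13:55:35Z
</code></pre>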
<h5 id="mimemime">MIME<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup></h5>
<p>To find the HTML, we use the <a href="https://github.com/cptaffe/email2rss/blob/246d6d04563fa3d886afcd35d40f7ed0c6799565/internal/email/email.go#L15"><code>MessageMIME</code></a> method:</p>
<pre><code class="language-go">// MessageMIME finds and parses a portion of the message based on the MIME type
func MessageMIME(message *mail.Message, contentType string) (io.Reader, error) {
	mediaType, params, err := mime.ParseMediaType(message.Header.Get(&quot;Content-Type&quot;))
	if err != nil {
		return nil, fmt.Errorf(&quot;parse message content type: %w&quot;, err)
	}
	if !strings.HasPrefix(mediaType, &quot;multipart/&quot;) {
		return nil, fmt.Errorf(&quot;expected multipart message but found %s&quot;, mediaType)
	}
	reader := multipart.NewReader(message.Body, params[&quot;boundary&quot;])
	if reader == nil {
		return nil, fmt.Errorf(&quot;could not construct multipart reader for message&quot;)
	}
	for {
		part, err := reader.NextPart()
		if err != nil {
			return nil, fmt.Errorf(&quot;could not find %s part of message: %w&quot;, contentType, err)
		}
		mediaType, _, err := mime.ParseMediaType(part.Header.Get(&quot;Content-Type&quot;))
		if err != nil {
			return nil, fmt.Errorf(&quot;parse multipart message part content type: %w&quot;, err)
		}
		if mediaType == contentType {
			enc := strings.ToLower(part.Header.Get(&quot;Content-Transfer-Encoding&quot;))
			switch enc {
			case &quot;base64&quot;:
				return base64.NewDecoder(base64.StdEncoding, part), nil
			case &quot;quoted-printable&quot;:
				return quotedprintable.NewReader(part), nil
			default:
				return part, nil
			}
		}
	}
}
</code></pre>
<p>The method parses the <code>Content-Type</code> header of the message to determine whether it is <code>multipart/</code>; if so, we read the <code>boundary</code> string used to delimit the parts and iterate through them with a <code>multipart.Reader</code>. As we iterate over each part, we again parse the <code>Content-Type</code>, looking for our target <code>text/html</code>. Each of these message parts could be another multipart message (in which case we could recurse) or even an entire email (<code>message/rfc822</code>); but for our purposes we only expect a single level in the tree. Once we've found the appropriate portion, we check <code>Content-Transfer-Encoding</code>; in our case the email is <code>quoted-printable</code><sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup> encoded, which looks like:</p>
<pre><code>&lt;h3 style=3D&quot;font-weight:bold;font-style:normal;font-size:1em;margin:0;font=
-size:1.17em;margin:1em 0;font-family:Charter, Georgia, Times New Roman, se=
rif;font-size:28px;color:#12363f;font-weight:400;letter-spacing:0;line-heig=
ht:1.5;text-transform:none;margin-top:0;margin-bottom:0&quot; class=3D&quot;&quot;&gt;Employi=
ng deep learning in crisis management and decision making through predictio=
n using time series data in Mosul Dam Northern Iraq&lt;/h3&gt;
</code></pre>
<p>Notice the trailing <code>=</code> for soft line-breaks and <code>=3D</code> to encode literal equal signs.</p>
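<p>Go's <code>mime/quotedprintable</code> package handles both of these for us:</p>
<pre><code class="language-go">qp := &quot;Employi=\r\nng deep learning=3D&quot;
decoded, err := io.ReadAll(quotedprintable.NewReader(strings.NewReader(qp)))
if err != nil {
	log.Fatal(err)
}
// Soft line break removed, =3D decoded to a literal equals sign
fmt.Println(string(decoded)) // Employing deep learning=
</code></pre>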
<p>I may rewrite this in the future to leverage a more general library like <a href="https://pkg.go.dev/github.com/emersion/go-message"><code>go-message</code></a>.</p>
<h5 id="parsing-the-html">Parsing the HTML</h5>
<p>Next, we extract information from the message using regular expressions<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>:</p>
<table>
<thead>
<tr>
<th>Expression</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>&quot;(https?://[^ ]+\.mp3)&quot;</code></td>
<td>The audio recording link included in each email, which becomes the <code>&lt;enclosure&gt;</code> <code>url</code> field.</td>
</tr>
<tr>
<td><code>&lt;img src=&quot;(https?://[^ ]*)&quot;</code></td>
<td>An image of the first page of the paper we can use as the podcast episode <code>&lt;itunes:image&gt;</code>. Unfortunately this is too low-resolution to be used by the Podcast app.</td>
</tr>
<tr>
<td><code>Hi[ ]+Connor, (.*)&lt;/p&gt;</code></td>
<td>The <code>&lt;description&gt;</code> of each episode, which begins with a salutation specific to each subscriber.</td>
</tr>
<tr>
<td><code>&lt;a [^&gt;]*href=&quot;(https?://(\w+\.)?doi.org[^&quot;]*)&quot;[^&gt;]*&gt;</code></td>
<td>The link to the paper, using the <a href="https://www.doi.org/">DOI</a>. This becomes part of the <code>&lt;description&gt;</code>.</td>
</tr>
</tbody>
</table>
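<p>For example, the audio-link expression run against a hypothetical fragment of the email body:</p>
<pre><code class="language-go">body := `&lt;a href=&quot;https://example.com/episodes/mosul-dam.mp3&quot;&gt;Listen&lt;/a&gt;`
audioRe := regexp.MustCompile(`&quot;(https?://[^ ]+\.mp3)&quot;`)
m := audioRe.FindStringSubmatch(body)
if m == nil {
	log.Fatal(&quot;no audio link found&quot;)
}
fmt.Println(m[1]) // https://example.com/episodes/mosul-dam.mp3
</code></pre>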
<p>Apple requires the <code>&lt;enclosure&gt;</code> <code>length</code> field to contain the number of bytes in the file, so we send a <code>HEAD</code> request to the audio URL and record the <code>Content-Length</code> to populate this field.</p>
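<p>A sketch of that lookup (the function name is ours):</p>
<pre><code class="language-go">// enclosureLength issues a HEAD request and reports the Content-Length,
// which populates the length attribute of the enclosure tag.
func enclosureLength(url string) (int64, error) {
	resp, err := http.Head(url)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	if resp.ContentLength &lt; 0 {
		return 0, fmt.Errorf(&quot;no Content-Length for %s&quot;, url)
	}
	return resp.ContentLength, nil
}
</code></pre>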
<p>We also extract some information from headers:</p>
<table>
<thead>
<tr>
<th>Header</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>Subject</code></td>
<td>Populates <code>&lt;title&gt;</code>, but needs its RFC 2047 encoded words decoded using <code>mime.WordDecoder</code>'s <code>DecodeHeader</code>.</td>
</tr>
<tr>
<td><code>Date</code></td>
<td>Used for the file name in cloud storage.</td>
</tr>
<tr>
<td><code>X-Apple-UUID</code></td>
<td>Used for the <code>&lt;guid&gt;</code> tag.</td>
</tr>
</tbody>
</table>
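<p>For instance, decoding a Q-encoded subject (the sample string is ours, not from an actual email):</p>
<pre><code class="language-go">dec := new(mime.WordDecoder)
subject, err := dec.DecodeHeader(&quot;=?UTF-8?Q?Caf=C3=A9_study?=&quot;)
if err != nil {
	log.Fatal(err)
}
fmt.Println(subject) // Café study
</code></pre>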
<p>Once we've extracted this information, we format our state into a JSON blob and write it to storage:</p>
<pre><code class="language-json">{
  &quot;uuid&quot;: &quot;1b1dd75f-e37e-4c55-b759-dea3b1dbba3a&quot;,
  &quot;subject&quot;: &quot;Employing deep learning in crisis management and decision making through prediction using time series data in Mosul Dam Northern Iraq&quot;,
  &quot;description&quot;: &quot;Today's article comes from the PeerJ Computer Science journal. The authors are Khafaji et al., from the University of Sfax, in Tunisia. In this paper they attempt to develop machine learning models that can predict the water-level fluctuations within a dam in Iraq. If they succeed, it will help the dam operators prevent a catastrophic collapse. Let's see how well they did.&quot;,
  &quot;date&quot;: &quot;2024-11-03T13:55:35Z&quot;,
  &quot;imageURL&quot;: &quot;https://embed.filekitcdn.com/e/3Uk7tL4uX5yjQZM3sj7FA5/sSM8ecFNXywfm7M3qy1tWu&quot;,
  &quot;audioURL&quot;: &quot;{REDACTED}&quot;,
  &quot;audioSize&quot;: 12926609,
  &quot;paperURL&quot;: &quot;http://dx.doi.org/10.7717/peerj-cs.2416&quot;
}
</code></pre>
<p>Then, all state files are read in from cloud storage and passed through a template to generate the new Podcast RSS feed:</p>
<pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
&lt;rss version=&quot;2.0&quot;
  xmlns:atom=&quot;http://www.w3.org/2005/Atom&quot;
  xmlns:content=&quot;http://purl.org/rss/1.0/modules/content/&quot;
  xmlns:itunes=&quot;http://www.itunes.com/dtds/podcast-1.0.dtd&quot;
  xmlns:podcast=&quot;https://podcastindex.org/namespace/1.0&quot; &gt;
  &lt;channel&gt;
    &lt;title&gt;Journal Club&lt;/title&gt;
    &lt;link&gt;https://journalclub.io/&lt;/link&gt;
    &lt;atom:link href=&quot;{REDACTED}&quot; rel=&quot;self&quot; type=&quot;application/rss+xml&quot; /&gt;
    &lt;language&gt;en-us&lt;/language&gt;
    &lt;copyright&gt;&amp;#169; 2024 JournalClub.io&lt;/copyright&gt;
    &lt;itunes:author&gt;Journal Club&lt;/itunes:author&gt;
    &lt;description&gt; Journal Club is a premium daily newsletter and podcast authored and hosted by Malcolm Diggs. Each episode is lovingly crafted by hand, and delivered to your inbox every morning in text and audio form.&lt;/description&gt;
    &lt;itunes:image href=&quot;https://www.journalclub.io/cdn-cgi/image/width=1000/images/journals/journal-splash.png&quot;/&gt;
    &lt;itunes:category text=&quot;Science&quot; /&gt;
    &lt;itunes:explicit&gt;false&lt;/itunes:explicit&gt;
    {{- range . }}
    &lt;item&gt;
        &lt;title&gt;{{.Subject}}&lt;/title&gt;
        &lt;description&gt;
          &lt;![CDATA[
          &lt;p&gt;{{- .Description -}}&lt;/p&gt;
          {{- if .PaperURL -}}
            &lt;p&gt;Want the paper? This &lt;a href=&quot;{{.PaperURL}}&quot;&gt;link&lt;/a&gt; will take you to the original DOI for the paper (on the publisher's site). You'll be able to grab the PDF from them directly.&lt;/p&gt;
          {{- end -}}
          ]]&gt;
        &lt;/description&gt;
        &lt;guid isPermaLink=&quot;false&quot;&gt;{{.UUID}}&lt;/guid&gt;
        &lt;pubDate&gt;{{ rfc2822 .Date }}&lt;/pubDate&gt;
        &lt;enclosure
            url=&quot;{{.AudioURL}}&quot;
            length=&quot;{{.AudioSize}}&quot;
            type=&quot;audio/mpeg&quot;
            /&gt;
        &lt;itunes:image href=&quot;{{.ImageURL}}&quot; /&gt;
        &lt;itunes:explicit&gt;false&lt;/itunes:explicit&gt;
    &lt;/item&gt;
    {{- end }}
  &lt;/channel&gt;
&lt;/rss&gt;
</code></pre>
<p>And the final output is cached in cloud storage.</p>
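<p>The <code>rfc2822</code> helper used in <code>&lt;pubDate&gt;</code> above is a custom template function; a sketch of how it might be wired up, using <code>time.RFC1123Z</code>, Go's closest stock layout to the RFC 2822 date format:</p>
<pre><code class="language-go">// A stand-in for the full feed template above.
const itemTemplate = `&lt;pubDate&gt;{{ rfc2822 .Date }}&lt;/pubDate&gt;`

var feedTmpl = template.Must(template.New(&quot;feed&quot;).
	Funcs(template.FuncMap{
		&quot;rfc2822&quot;: func(t time.Time) string { return t.Format(time.RFC1123Z) },
	}).
	Parse(itemTemplate))
</code></pre>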
<h4 id="get-feedfeedxml"><code>GET /{feed}/feed.xml</code></h4>
<p>Using the portable <a href="https://pkg.go.dev/gocloud.dev@v0.40.0/blob">blob</a> package, we can avoid coupling ourselves to a specific cloud storage backend, and even write tests using a <code>mem</code> or <code>file</code> backend. Then, we use <code>http.ServeContent</code> to handle the finicky logic around <code>Last-Modified</code>/<code>If-Modified-Since</code> and friends. Here's the implementation of <a href="https://github.com/cptaffe/email2rss/blob/246d6d04563fa3d886afcd35d40f7ed0c6799565/internal/server/server.go#L48"><code>GetFeed</code></a>:</p>
<pre><code class="language-go">func (s *Server) GetFeed(w http.ResponseWriter, req *http.Request) {
	ctx := req.Context()
	key := fmt.Sprintf(&quot;%s/feed.xml&quot;, req.PathValue(&quot;feed&quot;))
	attrs, err := s.bucket.Attributes(ctx, key)
	if err != nil {
		http.Error(w, &quot;Could not fetch feed attributes&quot;, http.StatusInternalServerError)
		log.Printf(&quot;fetch object attributes: %v&quot;, err)
		return
	}
	blobReader, err := s.bucket.NewReader(ctx, key, nil)
	if err != nil {
		http.Error(w, &quot;Could not fetch feed&quot;, http.StatusInternalServerError)
		log.Printf(&quot;construct object reader: %v&quot;, err)
		return
	}
	defer blobReader.Close()

	w.Header().Add(&quot;Content-Type&quot;, &quot;application/rss+xml;charset=UTF-8&quot;)
	w.Header().Add(&quot;Content-Disposition&quot;, &quot;inline&quot;)
	w.Header().Add(&quot;Cache-Control&quot;, &quot;no-cache&quot;)
	w.Header().Add(&quot;ETag&quot;, attrs.ETag)
	http.ServeContent(w, req, &quot;&quot;, blobReader.ModTime(), blobReader)
}
</code></pre>
<p>We can finally put the entire flow together:</p>
<figure class="graphviz">
<svg width="526pt" height="456pt" viewBox="0.00 0.00 525.63 456.08" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 452.08)"><polygon fill="white" stroke="none" points="-4,4 -4,-452.08 521.63,-452.08 521.63,4 -4,4"/><g id="clust1" class="cluster"><title>cluster_k8s</title><polygon fill="none" stroke="black" points="136,-82 136,-439.69 388,-439.69 388,-82 136,-82"/><text text-anchor="middle" x="262" y="-422.39" font-family="Times,serif" font-size="14.00">Kubernetes</text></g><g id="clust2" class="cluster"><title>cluster_mailrules</title><polygon fill="none" stroke="black" points="238,-172.77 238,-342.31 380,-342.31 380,-172.77 238,-172.77"/><text text-anchor="middle" x="309" y="-325.01" font-family="Times,serif" font-size="14.00">mailrules</text>
</g>
<g id="clust3" class="cluster">
<title>cluster_iphone</title>
<polygon fill="none" stroke="black" points="8,-361.31 8,-440.08 128,-440.08 128,-361.31 8,-361.31"/>
<text text-anchor="middle" x="68" y="-422.78" font-family="Times,serif" font-size="14.00">iPhone</text>
</g>
<!-- mailserver -->
<g id="node1" class="node">
<title>mailserver</title>
<ellipse fill="none" stroke="black" cx="457" cy="-199.15" rx="60.63" ry="18.38"/>
<text text-anchor="middle" x="457" y="-194.85" font-family="Times,serif" font-size="14.00">Mail Server</text>
</g>
<!-- rules -->
<g id="node2" class="node">
<title>rules</title>
<polygon fill="none" stroke="black" points="351,-405.69 303,-405.69 303,-369.69 357,-369.69 357,-399.69 351,-405.69"/>
<polyline fill="none" stroke="black" points="351,-405.69 351,-399.69"/>
<polyline fill="none" stroke="black" points="357,-399.69 351,-399.69"/>
<text text-anchor="middle" x="330" y="-383.39" font-family="Times,serif" font-size="14.00">Rules</text>
</g>
<!-- mailrules_loop -->
<g id="node5" class="node">
<title>mailrules_loop</title>
<ellipse fill="none" stroke="black" cx="330" cy="-289.92" rx="33.59" ry="18.38"/>
<text text-anchor="middle" x="330" y="-285.62" font-family="Times,serif" font-size="14.00">Loop</text>
</g>
<!-- rules&#45;&gt;mailrules_loop -->
<g id="edge2" class="edge">
<title>rules&#45;&gt;mailrules_loop</title>
<path fill="none" stroke="black" d="M330,-369.57C330,-355.72 330,-335.91 330,-319.63"/>
<polygon fill="black" stroke="black" points="333.5,-320.06 330,-310.06 326.5,-320.06 333.5,-320.06"/>
</g>
<!-- email2rss -->
<g id="node3" class="node">
<title>email2rss</title>
<ellipse fill="none" stroke="black" cx="235" cy="-108.38" rx="54.8" ry="18.38"/>
<text text-anchor="middle" x="235" y="-104.08" font-family="Times,serif" font-size="14.00">email2rss</text>
</g>
<!-- gcs -->
<g id="node9" class="node">
<title>gcs</title>
<path fill="none" stroke="black" d="M287.62,-32.73C287.62,-34.53 264.04,-36 235,-36 205.96,-36 182.38,-34.53 182.38,-32.73 182.38,-32.73 182.38,-3.27 182.38,-3.27 182.38,-1.47 205.96,0 235,0 264.04,0 287.62,-1.47 287.62,-3.27 287.62,-3.27 287.62,-32.73 287.62,-32.73"/>
<path fill="none" stroke="black" d="M287.62,-32.73C287.62,-30.92 264.04,-29.45 235,-29.45 205.96,-29.45 182.38,-30.92 182.38,-32.73"/>
<text text-anchor="middle" x="235" y="-13.7" font-family="Times,serif" font-size="14.00">Cloud Storage</text>
</g>
<!-- email2rss&#45;&gt;gcs -->
<g id="edge8" class="edge">
<title>email2rss&#45;&gt;gcs</title>
<path fill="none" stroke="black" d="M235,-78.47C235,-68.6 235,-57.57 235,-47.71"/>
<polygon fill="black" stroke="black" points="231.5,-78.41 235,-88.41 238.5,-78.41 231.5,-78.41"/>
<polygon fill="black" stroke="black" points="238.5,-47.94 235,-37.94 231.5,-47.94 238.5,-47.94"/>
<text text-anchor="middle" x="251.12" y="-58.7" font-family="Times,serif" font-size="14.00">state</text>
</g>
<!-- traefik -->
<g id="node4" class="node">
<title>traefik</title>
<ellipse fill="none" stroke="black" cx="186" cy="-199.15" rx="42.07" ry="18.38"/>
<text text-anchor="middle" x="186" y="-194.85" font-family="Times,serif" font-size="14.00">Traefik</text>
</g>
<!-- traefik&#45;&gt;email2rss -->
<g id="edge3" class="edge">
<title>traefik&#45;&gt;email2rss</title>
<path fill="none" stroke="black" d="M195.45,-181.04C202.39,-168.47 211.95,-151.14 219.94,-136.67"/>
<polygon fill="black" stroke="black" points="222.91,-138.52 224.68,-128.08 216.79,-135.14 222.91,-138.52"/>
<text text-anchor="middle" x="226.89" y="-149.47" font-family="Times,serif" font-size="14.00">RSS</text>
</g>
<!-- mailrules_loop&#45;&gt;mailserver -->
<g id="edge5" class="edge">
<title>mailrules_loop&#45;&gt;mailserver</title>
<path fill="none" stroke="black" d="M350.12,-274.86C370.04,-260.94 400.86,-239.39 424.3,-223.01"/>
<polygon fill="black" stroke="black" points="426.09,-226.03 432.28,-217.43 422.08,-220.29 426.09,-226.03"/>
<text text-anchor="middle" x="421.38" y="-240.24" font-family="Times,serif" font-size="14.00">IMAP</text>
</g>
<!-- stream -->
<g id="node6" class="node">
<title>stream</title>
<ellipse fill="none" stroke="black" cx="309" cy="-199.15" rx="63.29" ry="18.38"/>
<text text-anchor="middle" x="309" y="-194.85" font-family="Times,serif" font-size="14.00">StreamRule</text>
</g>
<!-- mailrules_loop&#45;&gt;stream -->
<g id="edge1" class="edge">
<title>mailrules_loop&#45;&gt;stream</title>
<path fill="none" stroke="black" d="M325.85,-271.39C322.99,-259.31 319.13,-242.98 315.82,-229"/>
<polygon fill="black" stroke="black" points="319.27,-228.36 313.56,-219.43 312.46,-229.97 319.27,-228.36"/>
<text text-anchor="middle" x="345.38" y="-240.24" font-family="Times,serif" font-size="14.00">Invokes</text>
</g>
<!-- stream&#45;&gt;email2rss -->
<g id="edge4" class="edge">
<title>stream&#45;&gt;email2rss</title>
<path fill="none" stroke="black" d="M294.73,-181.04C283.88,-168.02 268.77,-149.89 256.46,-135.13"/>
<polygon fill="black" stroke="black" points="259.28,-133.05 250.19,-127.61 253.91,-137.53 259.28,-133.05"/>
<text text-anchor="middle" x="304.88" y="-149.47" font-family="Times,serif" font-size="14.00">RFC 822</text>
</g>
<!-- podcasts -->
<g id="node7" class="node">
<title>podcasts</title>
<ellipse fill="none" stroke="black" cx="68" cy="-387.69" rx="52.15" ry="18.38"/>
<text text-anchor="middle" x="68" y="-383.39" font-family="Times,serif" font-size="14.00">Podcasts</text>
</g>
<!-- pfsense -->
<g id="node8" class="node">
<title>pfsense</title>
<ellipse fill="none" stroke="black" cx="77" cy="-289.92" rx="46.85" ry="18.38"/>
<text text-anchor="middle" x="77" y="-285.62" font-family="Times,serif" font-size="14.00">pfSense</text>
</g>
<!-- podcasts&#45;&gt;pfsense -->
<g id="edge7" class="edge">
<title>podcasts&#45;&gt;pfsense</title>
<path fill="none" stroke="black" d="M69.65,-369.13C70.94,-355.44 72.75,-336.15 74.25,-320.15"/>
<polygon fill="black" stroke="black" points="77.74,-320.49 75.19,-310.21 70.77,-319.84 77.74,-320.49"/>
</g>
<!-- pfsense&#45;&gt;traefik -->
<g id="edge6" class="edge">
<title>pfsense&#45;&gt;traefik</title>
<path fill="none" stroke="black" d="M96.74,-272.85C113.82,-258.94 138.72,-238.66 157.89,-223.05"/>
<polygon fill="black" stroke="black" points="160.06,-225.8 165.6,-216.77 155.64,-220.37 160.06,-225.8"/>
</g>
</g>
</svg>
</figure>
<h3 id="using">Using</h3>
<p>Once the <code>email2rss</code> service's <code>GET /{feed}/feed.xml</code> endpoint has been published to the internet, and <code>mailrules</code> is sending emails to the service to populate the RSS feed, we can finally open the Podcasts app and see what we've accomplished.</p>
<ul>
<li>On mobile, navigate in the Apple Podcasts app on iPhone to Library &gt; ..., then Follow a Show by URI..., then paste the full URL of our <code>feed.xml</code> endpoint.</li>
<li>On desktop, navigate to File &gt; Follow a Show by URI... (or command+shift+N).</li>
</ul>
<figure>
<img src="/resources/images/2024-11-03-email2podcasts/desktop.png" alt="Journal Club Podcast in Apple Podcasts on Mac" />
<figcaption>Journal Club Podcast in Apple Podcasts on Mac</figcaption>
</figure>
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>See also <a href="https://www.youtube.com/watch?v=HxaD_trXwRE"><em>Lexical Scanning in Go</em></a> by Rob Pike, which I've used as the basis for several previous projects involving lexers.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>MIME was introduced as an email standard in 1992; see a brief history in <a href="https://www.networkworld.com/article/719139/uc-voip-the-mime-guys-how-two-internet-gurus-changed-e-mail-forever.html"><em>The MIME guys: How two Internet gurus changed e-mail forever</em></a>. Its use on the Web through <code>Content-Type</code> and later <code>Accept</code> has not been without <a href="https://www.ietf.org/archive/id/draft-masinter-mime-web-info-01.html">hiccups</a>, necessitating that browsers sniff content and even a <a href="https://mimesniff.spec.whatwg.org/">MIME Sniffing</a> standard.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Email predates the Internet and is always ASCII encoded. ASCII was introduced as a 7-bit standard for telegraphs in the 60s, so to represent 8-bit UTF-8 characters, we need a way to encode them into 7-bit ASCII. The <code>quoted-printable</code> scheme (<a href="https://datatracker.ietf.org/doc/html/rfc2045#section-6.7">RFC 2045 §6.7</a>) does this by using an <code>=</code> sign followed by two hex digits. A literal equals sign in the original text must be encoded as <code>=3D</code>, <code>3D</code> being the hex for the ASCII code for <code>=</code>. Quoted-printable also requires that lines be at most 76 characters long; if the original text is longer, a <em>soft</em> line break can be inserted, which results in a <code>=</code> before the <code>\r\n</code> sequence. The only mention of line length in RFC 822 is when referencing long headers:</p>
<blockquote>
<p>&quot;Long&quot; is commonly interpreted  to  mean greater than 65 or 72 characters.  The former length serves as a limit, when the message is to be viewed  on most  simple terminals which use simple display software; however, the limit is not imposed by this standard.</p>
</blockquote>
<p>Long ago, email was delivered on UNIX machines using UUCP (UNIX-to-UNIX Copy), to user-specific mailboxes, and viewed with a command such as <code>mail</code>. In fact, email addresses looked much different before DNS; a UUCP <em>bang path</em>, for instance, represented the full route to the sender or recipient. As an example, Brian Reid's essay on Interpress was sent from the address <code>decwrl!glacier!reid</code>.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
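<p>As a quick illustration, Python's <code>quopri</code> module implements this scheme and can be invoked from a shell; note how <code>é</code> becomes <code>=C3=A9</code> and the literal equals sign becomes <code>=3D</code>:</p>
<pre><code class="language-sh">; python3 -c 'import quopri; print(quopri.encodestring(&quot;café =&quot;.encode()).decode())'
caf=C3=A9 =3D
</code></pre>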
</li>
<li id="fn:4">
<p>These were ported from my first attempt at a solution, which involved integrating the HTML parser into <code>mailrules</code> and generating <code>item.xml</code> intermediate files which were globbed into a final <code>feed.xml</code>, all via shell script. The <code>html</code> command input is still an <a href="https://github.com/cptaffe/mailrules/blob/main/rules/rules.go#L342">option</a> for the <code>stream</code> rule in place of <code>rfc822</code>. I then used <code>gsutil rsync</code> to copy files from the <code>mailrules</code> pod's shell script workspace to cloud storage, and served them with a simple static server container using <a href="https://coral.googlesource.com/busybox/+/refs/tags/1_28_1/networking/httpd.c">BusyBox's <code>httpd</code></a>; the same <code>gsutil rsync</code> ran in an init container and in a loop as a sidecar using a <a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/">shared empty volume</a>.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2024-05-20-analytics-with-plausible</id>
    <title>Analytics with Plausible</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2024-05-20-analytics-with-plausible" />
    <published>2024-05-20T20:30:00-05:00</published>
    <summary>Self-hosting a Google Analytics alternative</summary>
    
    <media:content url="https://connor.zip/resources/images/2024-05-20-analytics-with-plausible/plausible.png" medium="image" width="599" height="800"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
<p>While server-side logs are the most accurate record of who is accessing what on your site, client-side analytics can complement them by focusing only on browsers with JavaScript enabled, as opposed to any other client.</p>
<p>This blog is an exercise in creating a site from scratch, both code and infrastructure, so Google Analytics is off the table. I'd also like to avoid readers having to load scripts from a third party site, or give that tool access to my readers' behavior. <a href="https://plausible.io/">Plausible</a> is an open source solution that I can run myself, where I retain control over all analytics and how they are used.</p>
<p>I've been running Plausible for over a week now; here's what I see when I navigate to my instance:</p>
<figure>
<img src="/resources/images/2024-05-20-analytics-with-plausible/plausible.png" alt="Plausible Site View" />
<figcaption>Plausible Site View</figcaption>
</figure>
<p>I can quickly see where readers are connecting from geographically, including some countries I wouldn't expect like Russia, China, and Germany; where readers find my content, which is unsurprisingly Google but also some locally popular search engines; and which of my pages are most popular, by far my article on <a href="/posts/2023-06-08-airprint-with-cups/">AirPrint with CUPS</a>. This is all information I didn't have prior to setting up Plausible -- it should be available in the logs, but processing logs is something I haven't taken a whack at <em>yet</em>.</p>
<p>Below I detail how I set up Plausible in my environment. Plausible includes helpful materials in the Community Edition <a href="https://github.com/plausible/community-edition?tab=readme-ov-file#install">repo</a> including a Docker Compose file I based my Kubernetes configuration on.</p>
<h2 id="installation">Installation</h2>
<p>Before we start, you'll need to install PostgreSQL and ClickHouse in your environment.
As I haven't yet solved the problem of persistent volumes on Kubernetes at home, I use a dedicated Fedora Linux 38 VM at <code>db.home.arpa</code> running on VMWare to host these databases.</p>
<figure>
<img src="/resources/images/2024-05-20-analytics-with-plausible/vmware.png" alt="VMWare Web UI" />
<figcaption>VMWare Web UI</figcaption>
</figure>
<h3 id="install-postgresql">Install PostgreSQL</h3>
<ol>
<li>
<p>On our database instance, install PostgreSQL:</p>
<pre><code class="language-sh">; sudo dnf install postgresql-server postgresql-contrib
; sudo postgresql-setup --initdb --unit postgresql
; sudo systemctl enable --now postgresql
</code></pre>
</li>
<li>
<p>Now connect and optionally create a user and database for yourself, so you can log in with your own user on the VM.
The user name must match your Linux user name for <code>peer</code> authentication to succeed.</p>
<pre><code class="language-sh">; whoami
cptaffe
; sudo -u postgres psql
psql (15.4)
Type &quot;help&quot; for help.

postgres=# CREATE ROLE cptaffe LOGIN;
postgres=# CREATE DATABASE cptaffe;
</code></pre>
</li>
<li>
<p>Log in as that new user, and optionally set a password for connecting over the network:</p>
<pre><code class="language-sh">; psql
psql (15.4)
Type &quot;help&quot; for help.

cptaffe=&gt; \password cptaffe
</code></pre>
<p>Save this password in a password manager.</p>
</li>
<li>
<p>Edit the file <code>/var/lib/pgsql/data/pg_hba.conf</code> to control access; mine looks like:</p>
<pre><code># TYPE  DATABASE        USER            ADDRESS                 METHOD

# &quot;local&quot; is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            ident
# IPv6 local connections:
host    all             all             ::1/128                 ident
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            ident
host    replication     all             ::1/128                 ident
host    all             all             samenet                 scram-sha-256
</code></pre>
<p>The important line here is the last one, which enables connections over the network to all users and databases, but only from hosts on the same network, and only with <code>scram-sha-256</code> authentication. Restart PostgreSQL to pick up the change:</p>
<pre><code class="language-sh">; sudo systemctl restart postgresql.service
</code></pre>
</li>
<li>
<p>Add a firewall rule which allows connections to PostgreSQL:</p>
<pre><code class="language-sh">; sudo firewall-cmd --permanent --new-service=postgres
; sudo firewall-cmd --permanent --service=postgres --add-port=5432/tcp
; sudo firewall-cmd --permanent --add-service=postgres
; sudo firewall-cmd --reload
</code></pre>
</li>
<li>
<p>Now test that you can log in over the network. From another machine (assuming you have the same username):</p>
<pre><code class="language-sh">; psql postgres://db.home.arpa
Password for user cptaffe:
psql (14.9 (Homebrew), server 15.4)
WARNING: psql major version 14, server major version 15.
        Some psql features might not work.
Type &quot;help&quot; for help.

cptaffe=&gt;
</code></pre>
</li>
</ol>
<h3 id="install-clickhouse">Install ClickHouse</h3>
<ol>
<li>
<p>On the same instance, or another dedicated instance, install ClickHouse:</p>
<pre><code class="language-sh">; sudo yum install -y yum-utils
; sudo yum-config-manager --add-repo https://packages.clickhouse.com/rpm/clickhouse.repo
; sudo yum install -y clickhouse-server clickhouse-client
; sudo systemctl enable --now clickhouse-server
</code></pre>
</li>
<li>
<p>Edit <code>/etc/clickhouse-server/config.xml</code> to enable listening for remote connections:</p>
<pre><code class="language-xml">&lt;listen_host&gt;::&lt;/listen_host&gt;
</code></pre>
<p>I also set</p>
<pre><code class="language-xml">&lt;display_name&gt;db.home.arpa&lt;/display_name&gt;
</code></pre>
<p>and commented out any unused protocols like <code>mysql_port</code>, <code>postgresql_port</code>, etc.</p>
</li>
<li>
<p>Generate a random password and a hash for that password:</p>
<pre><code class="language-sh">; PASSWORD=$(base64 &lt; /dev/urandom | head -c8); echo &quot;$PASSWORD&quot;; echo -n &quot;$PASSWORD&quot; | sha256sum | tr -d '-'
</code></pre>
<p>Save this password in a password manager.</p>
<p>Then edit <code>/etc/clickhouse-server/users.xml</code> and, under the default user, add the line:</p>
<pre><code class="language-xml">&lt;password_sha256_hex&gt;xyz&lt;/password_sha256_hex&gt;
</code></pre>
<p>where <code>xyz</code> is replaced with the password hash from the above command.</p>
</li>
<li>
<p>Restart the service</p>
<pre><code class="language-sh">; sudo systemctl restart clickhouse-server
</code></pre>
<p>and ensure you can connect to it:</p>
<pre><code class="language-sh">; clickhouse-client
Password for user (default):

db.home.arpa :)
</code></pre>
</li>
<li>
<p>Add firewall rules to allow connection to ClickHouse:</p>
<pre><code class="language-sh">; sudo firewall-cmd --permanent --new-service=clickhouse
; sudo firewall-cmd --permanent --service=clickhouse --add-port=9000/tcp
; sudo firewall-cmd --permanent --service=clickhouse --add-port=8123/tcp
; sudo firewall-cmd --permanent --add-service=clickhouse
; sudo firewall-cmd --reload
</code></pre>
</li>
</ol>
<h3 id="credentials">Credentials</h3>
<p>Next, we should create dedicated accounts on both systems for Plausible, to limit access.
From our VM, run the following commands for PostgreSQL, replacing <code>xyz</code> with a secure random password.</p>
<pre><code class="language-sh">; sudo -u postgres psql
psql (15.4)
Type &quot;help&quot; for help.

postgres=# CREATE DATABASE plausible;
postgres=# CREATE USER plausible WITH ENCRYPTED PASSWORD 'xyz';
postgres=# GRANT ALL PRIVILEGES ON DATABASE plausible TO plausible;
postgres=# GRANT ALL ON SCHEMA public TO plausible;
</code></pre>
<p>Next do the same for ClickHouse:</p>
<pre><code class="language-sh">; clickhouse-client
Password for user (default):

db.home.arpa :) CREATE USER plausible IDENTIFIED WITH sha256_password BY 'xyz';
db.home.arpa :) CREATE DATABASE plausible;
db.home.arpa :) GRANT SELECT, INSERT, ALTER, CREATE DATABASE, CREATE TABLE, CREATE VIEW, CREATE DICTIONARY, DROP DATABASE, DROP TABLE, DROP VIEW, DROP DICTIONARY, TRUNCATE ON plausible.* TO plausible;
</code></pre>
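<p>Before moving on, it's worth confirming the new accounts work over the network. From another machine, something like the following (substituting your own password for <code>xyz</code>) should succeed:</p>
<pre><code class="language-sh">; psql 'postgres://plausible:xyz@db.home.arpa:5432/plausible' -c 'SELECT 1;'
; clickhouse-client --host db.home.arpa --user plausible --password xyz --query 'SELECT 1'
</code></pre>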
<h3 id="kubernetes">Kubernetes</h3>
<p>What follows is the Kubernetes configuration I use for my Plausible setup.</p>
<ol>
<li>
<p>First, create a new namespace for Plausible:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Namespace
metadata:
  name: plausible
</code></pre>
</li>
<li>
<p>Create a secret in that namespace populated with the login information from above:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Secret
metadata:
  name: plausible
  namespace: plausible
type: Opaque
stringData:
  BASE_URL: https://plausible.example.com
  SECRET_KEY_BASE:
  MAXMIND_LICENSE_KEY:
  MAXMIND_EDITION: GeoLite2-City
  GOOGLE_CLIENT_ID:
  GOOGLE_CLIENT_SECRET:
  DATABASE_URL: postgres://plausible:xyz@db.home.arpa:5432/plausible
  CLICKHOUSE_DATABASE_URL: http://plausible:xyz@db.home.arpa:8123/plausible
  DISABLE_REGISTRATION: invite_only
</code></pre>
<p>See the <a href="https://github.com/plausible/community-edition?tab=readme-ov-file#configure">documentation</a> for details on configuration. Replace <code>BASE_URL</code> with the Internet-accessible domain name of your instance.</p>
<p>A new <code>SECRET_KEY_BASE</code> value can be generated simply with:</p>
<pre><code class="language-sh">; head -c 64 &lt; /dev/urandom | base64
</code></pre>
</li>
<li>
<p>Create the deployment which will be configured by the secret:</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: plausible
  namespace: plausible
spec:
  selector:
    matchLabels:
      app: plausible
  template:
    metadata:
      labels:
        app: plausible
    spec:
      containers:
        - name: plausible
          image: plausible/analytics:latest
          command: [&quot;/bin/sh&quot;]
          args:
            [
              &quot;-c&quot;,
              &quot;sleep 10 &amp;&amp; /entrypoint.sh db createdb &amp;&amp; /entrypoint.sh db migrate &amp;&amp; /entrypoint.sh run&quot;,
            ]
          ports:
            - name: http
              containerPort: 8000
          envFrom:
            - secretRef:
                name: plausible
</code></pre>
</li>
<li>
<p>Create the service which will make our Plausible instance accessible on our local network:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: plausible
  namespace: plausible
spec:
  selector:
    app: plausible
  ports:
    - name: http
      port: 80
      targetPort: http
</code></pre>
<p>Once Plausible is running, navigate to it and set up your account.
On my network, pfSense delegates <code>k8s.home.arpa</code> to Kubernetes, so we can navigate to <code>https://plausible.plausible.svc.k8s.home.arpa/</code>.</p>
</li>
<li>
<p>Finally, create a new Ingress which will make the service available from the Internet.
On my cluster, <a href="https://github.com/travisghansen/kubernetes-pfsense-controller"><code>kubernetes-pfsense-controller</code></a> syncs the Ingress configuration to HAProxy running on pfSense.</p>
<pre><code class="language-yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: plausible
  namespace: plausible
spec:
  ingressClassName: traefik
  rules:
    - host: plausible.example.com
      http:
        paths:
          - backend:
              service:
                name: plausible
                port:
                  name: http
            path: /
            pathType: Prefix
</code></pre>
</li>
</ol>
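<p>Once all of the above is applied, a quick check that the rollout succeeded (assuming <code>kubectl</code> is pointed at your cluster):</p>
<pre><code class="language-sh">; kubectl -n plausible get pods
; kubectl -n plausible logs deployment/plausible --tail=20
</code></pre>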
<h3 id="dns">DNS</h3>
<p>We need a new DNS entry for our Plausible server's domain. Navigate to your DNS provider and mirror the <code>A</code> or <code>AAAA</code> records for your main domain for your new Plausible domain.</p>
<figure>
<img src="/resources/images/2024-05-20-analytics-with-plausible/cloudflare.png" alt="CloudFlare DNS Configuration" />
<figcaption>CloudFlare DNS Configuration</figcaption>
</figure>
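<p>You can verify the records before moving on; the new name should resolve to the same addresses as your main domain (substitute your own domains):</p>
<pre><code class="language-sh">; dig +short A example.com
; dig +short A plausible.example.com
</code></pre>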
<h3 id="pfsense">pfSense</h3>
<p>Configuration of HAProxy is automatically handled by <code>kubernetes-pfsense-controller</code>, so we only need to ensure our ACME certificate can handle our new <code>plausible.example.com</code> domain.</p>
<ol>
<li>Navigate to Services, ACME Certificates.</li>
<li>Click edit on your certificate.</li>
<li>In the Domain SAN List, add the new domain name, copying the configuration (e.g. webroot) from the existing domains.</li>
<li>Then back at the certificates list, click Issue/Renew on the certificate to ask Let's Encrypt to issue a new certificate with the updated domains list.</li>
</ol>
<p>If successful, we have our updated certificate and HTTPS will work on our Plausible services.</p>
<h2 id="setup">Setup</h2>
<p>Now that our Plausible server is accessible from the Internet, and we've created an account, we can add the analytics script to our site. For each page or template, add the following XML<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> snippet to the bottom of the <code>&lt;head&gt;</code> tag:</p>
<pre><code class="language-xhtml">&lt;script async=&quot;async&quot; data-domain=&quot;example.com&quot; src=&quot;https://plausible.example.com/js/script.js&quot;&gt;&lt;/script&gt;
</code></pre>
<p>This differs from the snippet Plausible provides in two ways:</p>
<ul>
<li>
<p>It uses the <code>attribute=&quot;attribute&quot;</code> form to side-step XML's lack of support for valueless attributes.</p>
</li>
<li>
<p>It uses <code>async</code> instead of <code>defer</code>. A <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script">script</a> using <code>defer</code> will not block the parsing of the HTML, but <em>will</em> block rendering of the page; whereas <code>async</code> will not block rendering of the page. This means that <code>defer</code> will break your page if the Plausible server is unreachable, slow, etc.</p>
<p>I ran into this issue firsthand when I navigated to my blog and realized that my employer blocks <code>.zip</code> domains from resolving via DNS until they are allow-listed. My blog resolved without issue, but the subdomain failed to resolve and failed to render the page. I believe this condition can be replicated using a DNS-based ad-blocker like a Pi-hole.</p>
</li>
</ul>
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>Yes, HTML 5 supports <a href="https://www.w3.org/blog/2008/html5-is-html-and-xml/">XML serialization</a>.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2024-05-05-pki</id>
    <title>PKI at Home</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2024-05-05-pki" />
    <published>2024-05-05T21:30:00-05:00</published>
    <summary>Setting up a private key infrastructure in a home lab environment</summary>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>When running services at home, the proliferation of self-signed certificates and browser errors to click through can become a pain point. By creating our own internal Certificate Authority (CA) and loading that certificate onto relevant machines, we can access these services securely and simply.</p>
<p>To do this, I use Cloudflare's <a href="https://github.com/cloudflare/cfssl"><code>cfssl</code></a> and follow a simple scheme I've encoded in my <a href="https://github.com/cptaffe/certs"><code>certs</code> repo</a>, orchestrated by the <code>Makefile</code>. This system is based on a pattern Rob Blackbourn <a href="https://rob-blackbourn.medium.com/how-to-use-cfssl-to-create-self-signed-certificates-d55f76ba5781">wrote about</a>.</p>
<h2 id="make">Make</h2>
<p>Our goal is to create a Certificate Authority (CA), Intermediate CA, and certificates signed by the Intermediate CA for each of our &quot;servers&quot; or services. If I want to construct a certificate just for my VMWare server, then my top-level goal could be:</p>
<pre><code>.PHONY: certs
certs: \
	servers/vms/vms.home.arpa-server.pem \
	servers/vms/vms.home.arpa-server-key.pem
</code></pre>
<p>All this says is that there is a goal named <code>certs</code> that is not a file (<code>.PHONY</code>), and it requires two files: the public and private certificates, which I name after the fully qualified domain of the service on my network, <code>vms.home.arpa</code>. Given this goal, <code>make</code> looks for a way to create the two named files.</p>
<p>To do this, I have a <a href="https://www.gnu.org/software/make/manual/html_node/Pattern-Intro.html">pattern rule</a> which constructs a set of public and private keys:</p>
<pre><code>servers/%-server.pem servers/%-server-key.pem: servers/%.json intermediate-ca.pem intermediate-ca-key.pem cfssl.json
	cfssl gencert -ca intermediate-ca.pem -ca-key intermediate-ca-key.pem -config cfssl.json -profile=server $&lt; | cfssljson -bare $(basename $&lt;)-server
</code></pre>
<p>This rule requires a configuration file, <code>servers/vms/vms.home.arpa.json</code>, along with keys for the Intermediate CA and the global config. We have to provide the two configs ourselves; both are illustrated below.</p>
<p>The file <a href="https://github.com/cptaffe/certs/blob/main/servers/vms/vms.home.arpa.json"><code>servers/vms/vms.home.arpa.json</code></a> looks like:</p>
<pre><code class="language-json">{
    &quot;CN&quot;: &quot;vms.home.arpa&quot;,
    &quot;key&quot;: {
        &quot;algo&quot;: &quot;rsa&quot;,
        &quot;size&quot;: 2048
    },
    &quot;names&quot;: [
        {
            &quot;C&quot;: &quot;US&quot;,
            &quot;ST&quot;: &quot;Arkansas&quot;,
            &quot;L&quot;: &quot;Little Rock&quot;,
            &quot;O&quot;: &quot;Heavy Computer&quot;,
            &quot;OU&quot;: &quot;Heavy Computer Registry&quot;
        }
    ],
    &quot;hosts&quot;: [
        &quot;vms.home.arpa&quot;,
        &quot;localhost&quot;,
        &quot;10.0.3.1&quot;
    ]
}
</code></pre>
<p>For the <code>CN</code> I use the fully qualified internal domain. I also add it in <code>hosts</code> alongside the persistent IP address configured via DHCP, plus <code>localhost</code> for convenience and debugging purposes. The <code>key</code> section determines what size and algorithm the certificate will use; we use an RSA key of length 2048, which is the current NIST-suggested minimum.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> The <code>names</code> section is not required to be accurate.</p>
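<p>Once a certificate has been generated, you can sanity-check that the subject and SANs came out as configured with <code>openssl</code>:</p>
<pre><code class="language-sh">; openssl x509 -in servers/vms/vms.home.arpa-server.pem -noout -subject -ext subjectAltName
</code></pre>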
<p>The file <a href="https://github.com/cptaffe/certs/blob/main/cfssl.json"><code>cfssl.json</code></a> looks like:</p>
<pre><code class="language-json">{
    &quot;signing&quot;: {
        &quot;default&quot;: {
            &quot;expiry&quot;: &quot;87600h&quot;
        },
        &quot;profiles&quot;: {
            &quot;intermediate-ca&quot;: {
                &quot;usages&quot;: [
                    &quot;signing&quot;,
                    &quot;digital signature&quot;,
                    &quot;key encipherment&quot;,
                    &quot;cert sign&quot;,
                    &quot;crl sign&quot;,
                    &quot;server auth&quot;,
                    &quot;client auth&quot;
                ],
                &quot;expiry&quot;: &quot;87600h&quot;,
                &quot;ca_constraint&quot;: {
                    &quot;is_ca&quot;: true,
                    &quot;max_path_len&quot;: 0,
                    &quot;max_path_len_zero&quot;: true
                }
            },
            &quot;peer&quot;: {
                &quot;usages&quot;: [
                    &quot;signing&quot;,
                    &quot;digital signature&quot;,
                    &quot;key encipherment&quot;,
                    &quot;client auth&quot;,
                    &quot;server auth&quot;
                ],
                &quot;expiry&quot;: &quot;87600h&quot;
            },
            &quot;server&quot;: {
                &quot;usages&quot;: [
                    &quot;signing&quot;,
                    &quot;digital signature&quot;,
                    &quot;key encipherment&quot;,
                    &quot;server auth&quot;
                ],
                &quot;expiry&quot;: &quot;87600h&quot;
            },
            &quot;client&quot;: {
                &quot;usages&quot;: [
                    &quot;signing&quot;,
                    &quot;digital signature&quot;,
                    &quot;key encipherment&quot;,
                    &quot;client auth&quot;
                ],
                &quot;expiry&quot;: &quot;87600h&quot;
            }
        }
    }
}
</code></pre>
<p>At the moment we are only using the <code>server</code> profile, as indicated by <code>-profile=server</code> in our pattern, but we will later reference <code>intermediate-ca</code>. I use ten years for all the expirations since I am not concerned about man-in-the-middle attacks using leaked certificates; for your use case, set whatever is appropriate.</p>
<h3 id="intermediate-ca">Intermediate CA</h3>
<p>Now that we've covered the config files, our rule still needs an Intermediate CA, which is where the next rule comes in:</p>
<pre><code>intermediate-ca.pem intermediate-ca-key.pem: ca.pem intermediate-ca.json cfssl.json
	cfssl gencert -initca intermediate-ca.json | cfssljson -bare intermediate-ca
	cfssl sign -ca ca.pem -ca-key ca-key.pem -config cfssl.json -profile intermediate-ca intermediate-ca.csr | cfssljson -bare intermediate-ca
</code></pre>
<p>This rule creates an Intermediate CA and signs it with the CA. The rule requires the CA certificates, Intermediate CA configuration, and global configuration. The global configuration is covered above, so I'll reproduce <a href="https://github.com/cptaffe/certs/blob/main/intermediate-ca.json"><code>intermediate-ca.json</code></a> below:</p>
<pre><code class="language-json">{
    &quot;CN&quot;: &quot;Heavy Computer Intermediate CA&quot;,
    &quot;key&quot;: {
        &quot;algo&quot;: &quot;rsa&quot;,
        &quot;size&quot;: 2048
    },
    &quot;names&quot;: [
        {
            &quot;C&quot;: &quot;US&quot;,
            &quot;ST&quot;: &quot;Arkansas&quot;,
            &quot;L&quot;: &quot;Little Rock&quot;,
            &quot;O&quot;: &quot;Heavy Computer&quot;,
            &quot;OU&quot;: &quot;Heavy Computer Intermediate CA&quot;
        }
    ],
    &quot;ca&quot;: {
        &quot;expiry&quot;: &quot;87600h&quot;
    }
}
</code></pre>
<p>You'll notice it's very similar to the <code>vms.home.arpa.json</code> configuration above, but with a CA expiry field configured to ten years.</p>
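<p>Once both rules have run, the full chain can be checked with <code>openssl verify</code>:</p>
<pre><code class="language-sh">; openssl verify -CAfile ca.pem -untrusted intermediate-ca.pem servers/vms/vms.home.arpa-server.pem
servers/vms/vms.home.arpa-server.pem: OK
</code></pre>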
<h3 id="ca">CA</h3>
<p>The CA is constructed by another rule:</p>
<pre><code>ca.pem ca-key.pem: ca.json
	cfssl gencert -initca ca.json | cfssljson -bare ca
	# Add to macOS keychain
	sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.pem
</code></pre>
<p>This rule not only constructs the CA, but adds it to my MacBook's system keychain. It requires a CA configuration; <a href="https://github.com/cptaffe/certs/blob/main/ca.json"><code>ca.json</code></a> is reproduced below:</p>
<pre><code class="language-json">{
    &quot;CN&quot;: &quot;Heavy Computer Root CA&quot;,
    &quot;CA&quot;: {
        &quot;expiry&quot;: &quot;87600h&quot;
    },
    &quot;key&quot;: {
        &quot;algo&quot;: &quot;rsa&quot;,
        &quot;size&quot;: 2048
    },
    &quot;names&quot;: [
        {
            &quot;O&quot;: &quot;Heavy Computer&quot;,
            &quot;OU&quot;: &quot;Heavy Computer Root CA&quot;,
            &quot;L&quot;: &quot;Little Rock&quot;,
            &quot;ST&quot;: &quot;Arkansas&quot;,
            &quot;C&quot;: &quot;US&quot;
        }
    ]
}
</code></pre>
<p>You'll notice it's almost identical to the Intermediate CA, except <code>ca.expiry</code> has moved to <code>CA.expiry</code>.</p>
<h3 id="sync">Sync</h3>
<p>At the end of the Makefile, there's a sync rule:</p>
<pre><code>.synced: certs
	gsutil -q -m rsync -u -x '(?!^.*\.pem$$)' -r . gs://certs.connor.zip
	gsutil -q -m rsync -u -x '(?!^.*\.pem$$)' -r gs://certs.connor.zip .
	date -u +'%Y-%m-%dT%H:%M:%SZ' &gt;$@
</code></pre>
<p>This rule copies all generated <code>.pem</code> files to a GCS bucket using <a href="https://cloud.google.com/storage/docs/gsutil/commands/rsync"><code>gsutil rsync</code></a>, then syncs any files from the bucket back to our local filesystem.</p>
<h2 id="usage">Usage</h2>
<p>You can clone my <a href="https://github.com/cptaffe/certs"><code>certs</code> repo</a> and use it to generate your own certs. Simply:</p>
<ol>
<li>Delete or reconfigure the <code>.synced</code> rule in the <code>Makefile</code>.</li>
<li>If not on macOS, remove the <code>sudo security add-trusted-cert ...</code> line from the CA rule.</li>
<li>Remove folders under <code>clients/</code> and <code>servers/</code> and replace them with services of your own.</li>
<li>Edit the <code>certs</code> rule in the <code>Makefile</code> to reflect only your new folders; each must contain a config with the same prefix as your <code>.pem</code> files. For instance, <code>servers/vms/vms.home.arpa.json</code> matches <code>servers/vms/vms.home.arpa-server.pem</code> and <code>servers/vms/vms.home.arpa-server-key.pem</code>.</li>
<li>Run <code>brew install cfssl</code> to install the utility.</li>
</ol>
<p>Then just run <code>make</code> from the root of the repository to generate all required certificates.</p>
<p>Follow the same steps under <code>clients/</code> to create client certificates, like those used for IRC.</p>
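<p>As a concrete example, a new server folder might hold a config like this hypothetical <code>servers/git/git.home.arpa.json</code>; the CN, hosts, and names here are placeholders to adapt to your own network:</p>
<pre><code class="language-json">{
    &quot;CN&quot;: &quot;git.home.arpa&quot;,
    &quot;hosts&quot;: [&quot;git.home.arpa&quot;],
    &quot;key&quot;: {
        &quot;algo&quot;: &quot;rsa&quot;,
        &quot;size&quot;: 2048
    },
    &quot;names&quot;: [
        {
            &quot;C&quot;: &quot;US&quot;,
            &quot;ST&quot;: &quot;Arkansas&quot;,
            &quot;L&quot;: &quot;Little Rock&quot;,
            &quot;O&quot;: &quot;Heavy Computer&quot;
        }
    ]
}
</code></pre>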
<h2 id="configuring-services">Configuring Services</h2>
<p>This section outlines instructions for some services I use on my network.</p>
<h3 id="vmware">VMware</h3>
<p>To install a new certificate on VMware 6.x, follow the instructions <a href="https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-security/GUID-AC7E6DD7-F984-4E0F-983A-463031BA5FE7.html#GUID-A261E6D8-03E4-48ED-ADB6-473C2DAAB7AD__GUID-2656823A-F776-468A-9CDF-E4D71F97D3BF">here</a>.</p>
<ol>
<li>Navigate to the Web UI and from the Host page click Actions, Services, Enable Secure Shell (SSH)</li>
<li>Create a combined certificates file:
<pre><code class="language-sh">cat servers/vms/vms.home.arpa-server.pem intermediate-ca.pem ca.pem &gt; vms.home.arpa-chain.pem
</code></pre>
</li>
<li>Copy the certificates, using the hostname on your network:
<pre><code class="language-sh">scp vms.home.arpa-chain.pem servers/vms/vms.home.arpa-server-key.pem vms.home.arpa:.
</code></pre>
</li>
<li>Log into the node using SSH:
<pre><code class="language-sh">ssh vms.home.arpa
</code></pre>
</li>
<li>Move the existing certs to backup files:
<pre><code class="language-sh">mv /etc/vmware/ssl/rui.crt /etc/vmware/ssl/orig.rui.crt
mv /etc/vmware/ssl/rui.key /etc/vmware/ssl/orig.rui.key
</code></pre>
</li>
<li>Copy our new certificates into position:
<pre><code class="language-sh">mv vms.home.arpa-chain.pem /etc/vmware/ssl/rui.crt
mv vms.home.arpa-server-key.pem /etc/vmware/ssl/rui.key
</code></pre>
</li>
<li>Reboot the VMware host</li>
</ol>
<p>Once the machine comes back up, navigating to the UI should show no TLS errors from our browser, assuming our CA is in the system keychain.</p>
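<p>You can also verify the served chain without a browser. Assuming <code>ca.pem</code> is in the current directory, connect with <code>openssl</code> and check the <code>Verify return code</code> line in the output, which should read <code>0 (ok)</code>:</p>
<pre><code class="language-sh">openssl s_client -connect vms.home.arpa:443 -CAfile ca.pem &lt;/dev/null
</code></pre>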
<h3 id="irc">IRC</h3>
<p>Client certificates (generated under <code>clients/</code>) can be used for CertFP authentication with IRC networks.</p>
<p>To get the fingerprint for e.g. OFTC (see <a href="https://www.oftc.net/NickServ/CertFP/">Automatically Identifying Using SSL + CertFP</a>):</p>
<pre><code>cat clients/irssi/irssi-client-key.pem clients/irssi/irssi-client.pem | openssl x509 -noout -fingerprint -sha1 | awk -F= '{gsub(&quot;:&quot;,&quot;&quot;); print $2}'
</code></pre>
<p>Or for Libera (see <a href="https://libera.chat/guides/certfp">Using CertFP</a>), which uses SHA-512:</p>
<pre><code>cat clients/irssi/irssi-client-key.pem clients/irssi/irssi-client.pem | openssl x509 -noout -fingerprint -sha512 | awk -F= '{gsub(&quot;:&quot;,&quot;&quot;); print tolower($2)}'
</code></pre>
<p>To get a certificate to paste into ZNC's User Modules &gt; Certificate form:</p>
<pre><code>cat clients/irssi/irssi-client-key.pem clients/irssi/irssi-client.pem | pbcopy
</code></pre>
<p>For more information on ZNC, see my <a href="/posts/2023-09-05-znc">more in-depth article</a>.</p>
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>NIST Special Publication 800-57 Part 3, <a href="https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57Pt3r1.pdf">Recommendation for Key Management</a> summarizes recommendations in section 2.2.1 Recommended Key Sizes and Algorithms, Table 2-1. Reproduced below:</p>
<table>
<thead>
<tr>
<th>Key Type</th>
<th>Algorithms and Key Sizes</th>
</tr>
</thead>
<tbody>
<tr>
<td>Digital Signature keys used for authentication (for Users or Devices)</td>
<td>RSA (2048 bits), ECDSA (Curve P-256)</td>
</tr>
<tr>
<td>Digital Signature keys used for non-repudiation (for Users or Devices)</td>
<td>RSA (2048 bits), ECDSA (Curves P-256 or P-384)</td>
</tr>
<tr>
<td>CA and OCSP Responder Signing Keys</td>
<td>RSA (2048 or 3072 bits), ECDSA (Curves P-256 or P-384)</td>
</tr>
<tr>
<td>Key Establishment keys (for Users or Devices)</td>
<td>RSA (2048 bits), Diffie-Hellman (2048 bits), ECDH (Curves P-256 or P-384)</td>
</tr>
</tbody>
</table>
&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2024-02-23-k8s-dns</id>
    <title>It was DNS</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2024-02-23-k8s-dns" />
    <published>2024-02-23T00:00:00-05:00</published>
    <summary>Debugging and customizing DNS with kubeadm</summary>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>I use <code>kubeadm</code> for my home k8s cluster that I run my blog on. Yesterday, I refactored my blog server to push static resources to Google Cloud Storage and then pull those resources down in an init container, instead of packaging them in the container image. Originally, I planned to redirect to a <a href="https://cloud.google.com/storage/docs/access-control/signed-urls">signed GCS URL</a> from my server application and have GCS handle serving static resources to the browser, but I ran into issues with page load time even with <code>Cache-Control</code> set to 48 hours. Eventually, I'd like to use a k8s <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/">storage class</a> and push resources to NFS or similar served by a redundant local deployment.</p>
<p>The previous flow:</p>
<figure class="graphviz">
<svg width="570pt" height="264pt" viewBox="0.00 0.00 569.74 264.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 260)"><polygon fill="white" stroke="none" points="-4,4 -4,-260 565.74,-260 565.74,4 -4,4"/><g id="clust1" class="cluster"><title>cluster_repo</title><polygon fill="none" stroke="black" points="8,-8 8,-248 216,-248 216,-8 8,-8"/><text text-anchor="middle" x="112" y="-230.7" font-family="Times,serif" font-size="14.00">Repo</text></g><!-- resources --><g id="node1" class="node"><title>resources</title><polygon fill="none" stroke="black" points="176.12,-214 41.88,-214 41.88,-160 182.12,-160 182.12,-208 176.12,-214"/><polyline fill="none" stroke="black" points="176.12,-214 176.12,-208"/><polyline fill="none" stroke="black" points="182.12,-208 176.12,-208"/>
<text text-anchor="middle" x="112" y="-182.7" font-family="Times,serif" font-size="14.00">Static Resources</text>
</g>
<!-- container -->
<g id="node4" class="node">
<title>container</title>
<polygon fill="none" stroke="black" points="404.37,-153 309.37,-153 305.37,-149 305.37,-99 400.37,-99 404.37,-103 404.37,-153"/>
<polyline fill="none" stroke="black" points="400.37,-149 305.37,-149"/>
<polyline fill="none" stroke="black" points="400.37,-149 400.37,-99"/>
<polyline fill="none" stroke="black" points="400.37,-149 404.37,-153"/>
<text text-anchor="middle" x="354.87" y="-121.7" font-family="Times,serif" font-size="14.00">Container</text>
</g>
<!-- resources&#45;&gt;container -->
<g id="edge1" class="edge">
<title>resources&#45;&gt;container</title>
<path fill="none" stroke="black" d="M182.35,-169.44C217.68,-160.49 260.22,-149.72 294.1,-141.14"/>
<polygon fill="black" stroke="black" points="294.93,-144.54 303.76,-138.69 293.21,-137.75 294.93,-144.54"/>
</g>
<!-- code -->
<g id="node2" class="node">
<title>code</title>
<polygon fill="none" stroke="black" points="202,-142 16,-142 16,-88 208,-88 208,-136 202,-142"/>
<polyline fill="none" stroke="black" points="202,-142 202,-136"/>
<polyline fill="none" stroke="black" points="208,-136 202,-136"/>
<text text-anchor="middle" x="112" y="-110.7" font-family="Times,serif" font-size="14.00">Server, Templates, Posts</text>
</g>
<!-- code&#45;&gt;container -->
<g id="edge2" class="edge">
<title>code&#45;&gt;container</title>
<path fill="none" stroke="black" d="M208.2,-119.35C236.93,-120.66 267.68,-122.06 293.48,-123.24"/>
<polygon fill="black" stroke="black" points="293.3,-126.74 303.45,-123.7 293.62,-119.74 293.3,-126.74"/>
</g>
<!-- deployment -->
<g id="node3" class="node">
<title>deployment</title>
<polygon fill="none" stroke="black" points="182.5,-70 35.5,-70 35.5,-16 188.5,-16 188.5,-64 182.5,-70"/>
<polyline fill="none" stroke="black" points="182.5,-70 182.5,-64"/>
<polyline fill="none" stroke="black" points="188.5,-64 182.5,-64"/>
<text text-anchor="middle" x="112" y="-38.7" font-family="Times,serif" font-size="14.00">Deployment YAML</text>
</g>
<!-- k8s -->
<g id="node6" class="node">
<title>k8s</title>
<ellipse fill="none" stroke="black" cx="354.87" cy="-43" rx="76.37" ry="38.18"/>
<text text-anchor="middle" x="354.87" y="-38.7" font-family="Times,serif" font-size="14.00">Kubernetes</text>
</g>
<!-- deployment&#45;&gt;k8s -->
<g id="edge4" class="edge">
<title>deployment&#45;&gt;k8s</title>
<path fill="none" stroke="black" d="M188.88,-43C213.75,-43 241.56,-43 267.16,-43"/>
<polygon fill="black" stroke="black" points="266.85,-46.5 276.85,-43 266.85,-39.5 266.85,-46.5"/>
<text text-anchor="middle" x="243.25" y="-47.7" font-family="Times,serif" font-size="14.00">apply</text>
</g>
<!-- gcr -->
<g id="node5" class="node">
<title>gcr</title>
<polygon fill="none" stroke="black" points="561.74,-111 558.74,-115 537.74,-115 534.74,-111 497.99,-111 497.99,-57 561.74,-57 561.74,-111"/>
<text text-anchor="middle" x="529.86" y="-79.7" font-family="Times,serif" font-size="14.00">GCR</text>
</g>
<!-- container&#45;&gt;gcr -->
<g id="edge3" class="edge">
<title>container&#45;&gt;gcr</title>
<path fill="none" stroke="black" d="M404.52,-114.19C430.43,-107.9 461.99,-100.24 486.82,-94.21"/>
<polygon fill="black" stroke="black" points="487.36,-97.68 496.25,-91.92 485.71,-90.88 487.36,-97.68"/>
<text text-anchor="middle" x="464.61" y="-107.19" font-family="Times,serif" font-size="14.00">push</text>
</g>
<!-- k8s&#45;&gt;gcr -->
<g id="edge5" class="edge">
<title>k8s&#45;&gt;gcr</title>
<path fill="none" stroke="black" d="M435.4,-61.85C457.59,-67.11 480.4,-72.51 497.93,-76.67"/>
<polygon fill="black" stroke="black" points="436.25,-58.45 425.71,-59.55 434.63,-65.27 436.25,-58.45"/>
<text text-anchor="middle" x="464.61" y="-76.58" font-family="Times,serif" font-size="14.00">pull</text>
</g>
</g>
</svg>
</figure>
<p>The updated flow:</p>
<figure class="graphviz">
<svg width="573pt" height="264pt" viewBox="0.00 0.00 572.74 264.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 260)"><polygon fill="white" stroke="none" points="-4,4 -4,-260 568.74,-260 568.74,4 -4,4"/><g id="clust1" class="cluster"><title>cluster_repo</title><polygon fill="none" stroke="black" points="8,-8 8,-248 216,-248 216,-8 8,-8"/><text text-anchor="middle" x="112" y="-230.7" font-family="Times,serif" font-size="14.00">Repo</text></g><!-- resources --><g id="node1" class="node"><title>resources</title><polygon fill="none" stroke="black" points="176.12,-214 41.88,-214 41.88,-160 182.12,-160 182.12,-208 176.12,-214"/><polyline fill="none" stroke="black" points="176.12,-214 176.12,-208"/><polyline fill="none" stroke="black" points="182.12,-208 176.12,-208"/>
<text text-anchor="middle" x="112" y="-182.7" font-family="Times,serif" font-size="14.00">Static Resources</text>
</g>
<!-- gcs -->
<g id="node6" class="node">
<title>gcs</title>
<polygon fill="none" stroke="black" points="563.99,-180 560.99,-184 539.99,-184 536.99,-180 501.74,-180 501.74,-126 563.99,-126 563.99,-180"/>
<text text-anchor="middle" x="532.86" y="-148.7" font-family="Times,serif" font-size="14.00">GCS</text>
</g>
<!-- resources&#45;&gt;gcs -->
<g id="edge1" class="edge">
<title>resources&#45;&gt;gcs</title>
<path fill="none" stroke="black" d="M182.55,-185.97C257.16,-184.11 378.97,-178.84 482.99,-164 485.43,-163.65 487.94,-163.24 490.46,-162.79"/>
<polygon fill="black" stroke="black" points="490.88,-166.28 500.01,-160.91 489.52,-159.41 490.88,-166.28"/>
<text text-anchor="middle" x="354.87" y="-186.85" font-family="Times,serif" font-size="14.00">rsync</text>
</g>
<!-- code -->
<g id="node2" class="node">
<title>code</title>
<polygon fill="none" stroke="black" points="202,-70 16,-70 16,-16 208,-16 208,-64 202,-70"/>
<polyline fill="none" stroke="black" points="202,-70 202,-64"/>
<polyline fill="none" stroke="black" points="208,-64 202,-64"/>
<text text-anchor="middle" x="112" y="-38.7" font-family="Times,serif" font-size="14.00">Server, Templates, Posts</text>
</g>
<!-- container -->
<g id="node4" class="node">
<title>container</title>
<polygon fill="none" stroke="black" points="404.37,-59 309.37,-59 305.37,-55 305.37,-5 400.37,-5 404.37,-9 404.37,-59"/>
<polyline fill="none" stroke="black" points="400.37,-55 305.37,-55"/>
<polyline fill="none" stroke="black" points="400.37,-55 400.37,-5"/>
<polyline fill="none" stroke="black" points="400.37,-55 404.37,-59"/>
<text text-anchor="middle" x="354.87" y="-27.7" font-family="Times,serif" font-size="14.00">Container</text>
</g>
<!-- code&#45;&gt;container -->
<g id="edge2" class="edge">
<title>code&#45;&gt;container</title>
<path fill="none" stroke="black" d="M208.2,-38.65C236.93,-37.34 267.68,-35.94 293.48,-34.76"/>
<polygon fill="black" stroke="black" points="293.62,-38.26 303.45,-34.3 293.3,-31.26 293.62,-38.26"/>
</g>
<!-- deployment -->
<g id="node3" class="node">
<title>deployment</title>
<polygon fill="none" stroke="black" points="182.5,-142 35.5,-142 35.5,-88 188.5,-88 188.5,-136 182.5,-142"/>
<polyline fill="none" stroke="black" points="182.5,-142 182.5,-136"/>
<polyline fill="none" stroke="black" points="188.5,-136 182.5,-136"/>
<text text-anchor="middle" x="112" y="-110.7" font-family="Times,serif" font-size="14.00">Deployment YAML</text>
</g>
<!-- k8s -->
<g id="node7" class="node">
<title>k8s</title>
<ellipse fill="none" stroke="black" cx="354.87" cy="-115" rx="76.37" ry="38.18"/>
<text text-anchor="middle" x="354.87" y="-110.7" font-family="Times,serif" font-size="14.00">Kubernetes</text>
</g>
<!-- deployment&#45;&gt;k8s -->
<g id="edge4" class="edge">
<title>deployment&#45;&gt;k8s</title>
<path fill="none" stroke="black" d="M188.88,-115C213.75,-115 241.56,-115 267.16,-115"/>
<polygon fill="black" stroke="black" points="266.85,-118.5 276.85,-115 266.85,-111.5 266.85,-118.5"/>
<text text-anchor="middle" x="243.25" y="-119.7" font-family="Times,serif" font-size="14.00">apply</text>
</g>
<!-- gcr -->
<g id="node5" class="node">
<title>gcr</title>
<polygon fill="none" stroke="black" points="564.74,-103 561.74,-107 540.74,-107 537.74,-103 500.99,-103 500.99,-49 564.74,-49 564.74,-103"/>
<text text-anchor="middle" x="532.86" y="-71.7" font-family="Times,serif" font-size="14.00">GCR</text>
</g>
<!-- container&#45;&gt;gcr -->
<g id="edge3" class="edge">
<title>container&#45;&gt;gcr</title>
<path fill="none" stroke="black" d="M404.42,-44.14C431.19,-50.83 464.09,-59.06 489.76,-65.48"/>
<polygon fill="black" stroke="black" points="488.62,-68.8 499.17,-67.83 490.32,-62.01 488.62,-68.8"/>
<text text-anchor="middle" x="466.11" y="-67.7" font-family="Times,serif" font-size="14.00">push</text>
</g>
<!-- k8s&#45;&gt;gcr -->
<g id="edge6" class="edge">
<title>k8s&#45;&gt;gcr</title>
<path fill="none" stroke="black" d="M436.32,-97.17C459.2,-92.1 482.76,-86.88 500.76,-82.89"/>
<polygon fill="black" stroke="black" points="435.66,-93.73 426.65,-99.31 437.17,-100.57 435.66,-93.73"/>
<text text-anchor="middle" x="466.11" y="-97.86" font-family="Times,serif" font-size="14.00">pull</text>
</g>
<!-- k8s&#45;&gt;gcs -->
<g id="edge5" class="edge">
<title>k8s&#45;&gt;gcs</title>
<path fill="none" stroke="black" d="M436.64,-132.44C459.65,-137.41 483.32,-142.52 501.31,-146.4"/>
<polygon fill="black" stroke="black" points="437.42,-129.03 426.9,-130.34 435.94,-135.87 437.42,-129.03"/>
<text text-anchor="middle" x="466.11" y="-146.47" font-family="Times,serif" font-size="14.00">rsync</text>
</g>
</g>
</svg>
</figure>
<p>I added the following init container to my deployment to pull resources into a new <code>emptyDir</code> volume:</p>
<pre><code class="language-yaml">...
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - ...
        volumeMounts:
        - name: resources
          readOnly: true
          mountPath: /usr/src/blog/resources
      initContainers:
      - name: init
        image: gcr.io/google.com/cloudsdktool/google-cloud-cli:alpine
        command: [&quot;/bin/sh&quot;]
        args: [&quot;-c&quot;, &quot;gcloud auth activate-service-account --key-file=/var/gcp-creds/creds.json; gsutil -m rsync -r gs://connor.zip/resources /var/resources&quot;]
        volumeMounts:
        - name: gcp-creds
          readOnly: true
          mountPath: /var/gcp-creds
        - name: resources
          mountPath: /var/resources
      volumes:
      - name: gcp-creds
        secret:
          secretName: gcp-creds
      - name: resources
        emptyDir: {}
</code></pre>
<p>I used the <a href="https://cloud.google.com/sdk/docs/downloads-docker"><code>google-cloud-cli</code> container</a> for access to the <code>gsutil</code> command. The <code>gsutil</code> command doesn't read the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable like the SDK client would, so we instead need to run</p>
<pre><code class="language-sh">; gcloud auth activate-service-account --key-file=/var/gcp-creds/creds.json
</code></pre>
<p>Once the init container was scheduled, it never finished <a href="https://cloud.google.com/storage/docs/gsutil/commands/rsync">rsync</a>-ing files and instead showed:</p>
<pre><code class="language-sh">; kubectl get pods
NAME                        READY   STATUS             RESTARTS          AGE
blog-67f956dd6-26q85        0/1     Init:0/1           0                 7s
</code></pre>
<p>The logs for the pod showed a connectivity issue:</p>
<pre><code class="language-sh">; kubectl logs pods/blog-67f956dd6-26q85 -c init
INFO 0223 06:29:43.857160 retry_util.py] Retrying request, attempt #2...
</code></pre>
<p>So, why can't <code>gsutil</code> complete the request? Let's check the pod's connectivity:</p>
<pre><code class="language-sh">; kubectl exec -it blog-b85849cd-8j4mf -c init -- /bin/sh
/ # ping google.com
ping: bad address 'google.com'
/ # curl google.com
curl: (6) Could not resolve host: google.com
</code></pre>
<p>Now that we've identified the issue as DNS, we can follow the instructions at <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/">Debugging DNS Resolution</a>. The resolution flow in my network is shown below:</p>
<figure class="graphviz">
<svg width="607pt" height="62pt" viewBox="0.00 0.00 607.25 62.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 58)"><polygon fill="white" stroke="none" points="-4,4 -4,-58 603.25,-58 603.25,4 -4,4"/><!-- pod --><g id="node1" class="node"><title>pod</title><polygon fill="none" stroke="black" points="60.75,-54 0,-54 0,0 60.75,0 60.75,-54"/><text text-anchor="middle" x="30.38" y="-22.7" font-family="Times,serif" font-size="14.00">Pod</text></g><!-- dns1 --><g id="node2" class="node"><title>dns1</title><polygon fill="none" stroke="black" points="290.25,-54 195,-54 195,0 290.25,0 290.25,-54"/><text text-anchor="middle" x="242.62" y="-22.7" font-family="Times,serif" font-size="14.00">kube&#45;dns</text></g><!-- pod&#45;&gt;dns1 -->
<g id="edge1" class="edge">
<title>pod&#45;&gt;dns1</title>
<path fill="none" stroke="black" d="M61.05,-27C92.68,-27 143.57,-27 183.27,-27"/>
<polygon fill="black" stroke="black" points="183.16,-30.5 193.16,-27 183.16,-23.5 183.16,-30.5"/>
<text text-anchor="middle" x="127.88" y="-31.7" font-family="Times,serif" font-size="14.00">/etc/resolv.conf</text>
</g>
<!-- dns2 -->
<g id="node3" class="node">
<title>dns2</title>
<polygon fill="none" stroke="black" points="452.25,-54 366,-54 366,0 452.25,0 452.25,-54"/>
<text text-anchor="middle" x="409.12" y="-22.7" font-family="Times,serif" font-size="14.00">pfSense</text>
</g>
<!-- dns1&#45;&gt;dns2 -->
<g id="edge2" class="edge">
<title>dns1&#45;&gt;dns2</title>
<path fill="none" stroke="black" d="M290.32,-27C310.19,-27 333.48,-27 354.08,-27"/>
<polygon fill="black" stroke="black" points="354.02,-30.5 364.02,-27 354.02,-23.5 354.02,-30.5"/>
<text text-anchor="middle" x="328.12" y="-31.7" font-family="Times,serif" font-size="14.00">config</text>
</g>
<!-- dns3 -->
<g id="node4" class="node">
<title>dns3</title>
<polygon fill="none" stroke="black" points="599.25,-54 528,-54 528,0 599.25,0 599.25,-54"/>
<text text-anchor="middle" x="563.62" y="-22.7" font-family="Times,serif" font-size="14.00">1.1.1.1</text>
</g>
<!-- dns2&#45;&gt;dns3 -->
<g id="edge3" class="edge">
<title>dns2&#45;&gt;dns3</title>
<path fill="none" stroke="black" d="M452.6,-27C472.35,-27 495.88,-27 516.1,-27"/>
<polygon fill="black" stroke="black" points="516.06,-30.5 526.06,-27 516.06,-23.5 516.06,-30.5"/>
<text text-anchor="middle" x="490.12" y="-31.7" font-family="Times,serif" font-size="14.00">config</text>
</g>
</g>
</svg>
</figure>
<p>DNS resolution within a pod is managed by k8s, which writes the pod's <code>/etc/resolv.conf</code> file. Ours looks like:</p>
<pre><code class="language-sh">search default.svc.cluster.local svc.cluster.local cluster.local home.arpa
nameserver 10.96.0.10
options ndots:5
</code></pre>
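<p>You can confirm what a particular pod actually received by dumping the file directly; the pod and container names here are from my deployment:</p>
<pre><code class="language-sh">; kubectl exec blog-67f956dd6-26q85 -c init -- cat /etc/resolv.conf
</code></pre>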
<p>We should ensure our <code>kube-dns</code> service has the correct cluster IP:</p>
<pre><code class="language-sh">; kubectl get svc/kube-dns -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.3.0.10    &lt;none&gt;        53/UDP,53/TCP,9153/TCP   466d
</code></pre>
<p>There is a mismatch! In my case, the cause was that the worker's config didn't reflect the updated IP I had chosen for <code>kube-dns</code>: I had previously updated the <code>/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf</code> file on the node by hand, and when I updated the node, the <code>ConfigMap</code> stored in k8s overwrote this file. The section of the k8s docs on <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/#updating-the-kubeletconfiguration">Updating the <code>KubeletConfiguration</code></a> instructs us to edit:</p>
<pre><code class="language-sh">; kubectl edit cm -n kube-system kubelet-config
</code></pre>
<p>In my cluster, the config contained:</p>
<pre><code>clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
</code></pre>
<p>which is outdated; it needed to be updated to:</p>
<pre><code>clusterDNS:
- 10.3.0.10
clusterDomain: k8s.home.arpa
</code></pre>
<p>This is because my <code>kube-dns</code> service exists within the <code>10.3.x.x</code> IP range, and because I want k8s DNS addresses to be placed under my home network's <code>home.arpa</code> DNS zone. The pfSense firewall is configured to delegate DNS for the <code>10.3.x.x</code> and <code>10.2.x.x</code> service and pod networks, as well as <code>k8s.home.arpa</code>, to Kubernetes so that I can easily reach k8s resources from the rest of the network. For instance, this blog is accessible on my local network via <code>http://blog.default.svc.k8s.home.arpa/</code>.</p>
<figure>
<img src="/resources/images/2024-02-23-k8s-dns/pfsense-k8s.png" alt="Kubernetes Domain Overrides" />
<figcaption>Kubernetes Domain Overrides</figcaption>
</figure>
<p>Once the <code>ConfigMap</code> is updated, each node needs to pull down the new config:</p>
<pre><code class="language-sh">; sudo kubeadm upgrade node phase kubelet-config
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/config.yaml&quot;
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
; sudo systemctl restart kubelet.service
</code></pre>
<p>This will update the <code>/etc/resolv.conf</code> on pods created by the kubelet to reflect our new DNS and domain configuration. We may need to roll pods so they pick it up.</p>
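<p>For pods managed by a Deployment, rolling them is a single command; the deployment name here is mine:</p>
<pre><code class="language-sh">; kubectl rollout restart deployment/blog
</code></pre>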
<p>The CoreDNS config can be edited with:</p>
<pre><code class="language-sh">; kubectl -n kube-system edit configmap coredns
</code></pre>
<p>which contains:</p>
<pre><code>.:53 {
    log
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes k8s.home.arpa in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
        max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}
</code></pre>
<p>The <code>log</code> line was added to help with debugging. The <code>k8s.home.arpa</code> mention on the <code>kubernetes</code> line is meant to match our <code>clusterDomain</code> configuration in the kubelet, and forwarding to <code>/etc/resolv.conf</code> follows the system configuration. We can inspect that configuration by running a <code>dnsutils</code> pod:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command:
      - sleep
      - &quot;infinity&quot;
    imagePullPolicy: IfNotPresent
  dnsPolicy: Default
  restartPolicy: Always
</code></pre>
<p>This pod will receive the same <code>/etc/resolv.conf</code> that CoreDNS will since it has <code>dnsPolicy: Default</code>. On my system, the kubelet is configured to pull <code>/etc/resolv.conf</code> from <code>/run/systemd/resolve/resolv.conf</code>, which contains:</p>
<pre><code>...
nameserver 10.0.0.1
nameserver 2600:1700:f08:111f:20c:29ff:fe6f:1149
nameserver 10.0.0.1
# Too many DNS servers configured, the following entries may be ignored.
nameserver 2600:1700:f08:111f:20c:29ff:fe6f:1149
search home.arpa
</code></pre>
<p>In the logs for CoreDNS, I noticed errors around resolution of GCS DNS:</p>
<pre><code>; kubectl logs --namespace=kube-system -l k8s-app=kube-dns -f
...
[INFO] 10.2.110.43:50804 - 1443 &quot;A IN storage.googleapis.com.home.arpa. udp 50 false 512&quot; SERVFAIL qr,aa,rd,ra 50 0.000196806s
</code></pre>
<p>The <code>search home.arpa</code> line tells CoreDNS to try names within <code>.home.arpa</code> as well as globally, but lookups within the <code>.home.arpa</code> zone are failing. It turns out that pfSense’s unbound can get overloaded when set to recursive mode: since the <code>/etc/resolv.conf</code> <code>search</code> line contains <code>home.arpa</code>, every domain gets tried with that suffix, e.g. <code>google.com.home.arpa</code>.</p>
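<p>The failing expansion can be reproduced directly from the <code>dnsutils</code> pod; this query should surface the same <code>SERVFAIL</code> seen in the CoreDNS logs:</p>
<pre><code class="language-sh">; kubectl exec -i -t dnsutils -- nslookup storage.googleapis.com.home.arpa
</code></pre>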
<p>Under Services, <a href="https://docs.netgate.com/pfsense/en/latest/services/dns/resolver-config.html">DNS Resolver</a> in pfSense, I could see “System Domain Local Zone Type” configured as Transparent, which asks <em>upstream</em> if <code>x.home.arpa</code> isn’t present locally. Swapping this to Static avoids that upstream lookup and returns an <code>NXDOMAIN</code> when no overrides are present, since pfSense should be the owner of <code>.home.arpa</code>. I also toggled on forwarding mode so that unbound hits the upstream DNS server for zones it doesn’t manage instead of being a recursive resolver itself.</p>
<figure>
<img src="/resources/images/2024-02-23-k8s-dns/pfsense-dns.png" alt="pfSense DNS Resolver" />
<figcaption>pfSense DNS Resolver</figcaption>
</figure>
<p>These changes solved my issue and the init container was able to pull resources from GCS. I later realized that setting the &quot;System Domain Local Zone Type&quot; to Static caused the Domain Overrides for <code>k8s.home.arpa</code> to fail, so I switched it back to Transparent. So far, DNS within Kubernetes is working properly:</p>
<pre><code class="language-sh">; kubectl exec -i -t dnsutils -- nslookup google.com
Server:		10.3.0.10
Address:	10.3.0.10#53

Non-authoritative answer:
Name:	google.com
Address: 142.250.113.101
Name:	google.com
Address: 142.250.113.100
Name:	google.com
Address: 142.250.113.138
Name:	google.com
Address: 142.250.113.102
Name:	google.com
Address: 142.250.113.113
Name:	google.com
Address: 142.250.113.139
</code></pre>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2024-02-16-hifi</id>
    <title>A HiFi Setup</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2024-02-16-hifi" />
    <published>2024-02-18T20:30:00-06:00</published>
    <summary>Integrating the Magnepan LRS into a home audio setup.</summary>
    
    <media:content url="https://connor.zip/resources/images/2024-02-16-hifi/bryston.jpg" medium="image" width="800" height="533"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>Years ago, I decided to upgrade my <a href="https://edifier-online.com/us/en/speakers/s2000pro-bookshelf-speakers-studio-monitors">Edifier S2000 Pro</a> powered bookshelf speakers, and bought a pair of <a href="https://www.stereophile.com/content/magnepan-lrs-loudspeaker-0">Magnepan Little Ribbon Speakers (LRS)</a>. I had heard amazing things about ribbon speakers, but owning a pair of Magnepans had been out of reach for me until the release of the LRS. At the time of writing, the LRS has been discontinued and replaced with the more expensive <a href="https://magnepan.com/products/magnepan-lrs-1">LRS+</a>. I placed an order in early April of 2021, and they arrived in late October. The Edifiers are excellent all-in-one speakers which now reside in my friend's apartment in Philadelphia.</p>
<p>These speakers require <em>a lot</em> of power to drive, and are often paired with an amp like the <a href="https://nadelectronics.com/product/m33-bluos-streaming-dac-amplifier/">NAD M33</a>, which can provide 380 watts into 4Ω. On their website, Wendell Diller wrote:</p>
<blockquote>
<p>The LRS is a full-range quasi-ribbon speaker that was designed from the ground up to give you a pretty good idea what to expect from the 20.7 or 30.7. The LRS was designed using high-end electronics and monoblocks. The LRS will perform nicely with a receiver, but it was intentionally designed to extract the most from high-end amplifiers and electronics. The LRS expects more from a properly designed high-current amplifier. That is a radical departure from most entry-level loudspeakers. If you put your expensive high-end amplifier on the LRS, you will hear the difference.</p>
</blockquote>
<p>High-power amps are expensive, often much more expensive than the LRS they'd be driving. To save money, I took to Facebook Marketplace and found an old audio/video receiver: a Yamaha RX-V750. The Yamaha included good-quality versions of all the components I needed: a DAC to handle TOSLINK audio out from my television; a preamp and an amplifier that could handle a 4Ω load at 150 watts; a separate subwoofer output with configurable crossover; multiple analog inputs for a record player, tape deck, etc.; and a remote control -- for less than $100.</p>
<p>I paired these speakers with a <a href="https://www.rythmikaudio.com/L12.html">Rythmik L12</a> subwoofer and set the crossover to 80Hz.</p>
<figure class="graphviz">
<svg width="538pt" height="134pt" viewBox="0.00 0.00 537.50 134.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 130)"><polygon fill="white" stroke="none" points="-4,4 -4,-130 533.5,-130 533.5,4 -4,4"/><!-- atv --><g id="node1" class="node"><title>atv</title><polygon fill="none" stroke="black" points="92.25,-90 0,-90 0,-36 92.25,-36 92.25,-90"/><text text-anchor="middle" x="46.12" y="-58.7" font-family="Times,serif" font-size="14.00">Apple TV</text></g><!-- tv --><g id="node2" class="node"><title>tv</title><polygon fill="none" stroke="black" points="182.25,-90 128.25,-90 128.25,-36 182.25,-36 182.25,-90"/><text text-anchor="middle" x="155.25" y="-58.7" font-family="Times,serif" font-size="14.00">TV</text></g><!-- atv&#45;&gt;tv -->
<g id="edge1" class="edge">
<title>atv&#45;&gt;tv</title>
<path fill="none" stroke="black" d="M92.56,-63C100.58,-63 108.86,-63 116.64,-63"/>
<polygon fill="black" stroke="black" points="116.47,-66.5 126.47,-63 116.47,-59.5 116.47,-66.5"/>
</g>
<!-- avr -->
<g id="node3" class="node">
<title>avr</title>
<polygon fill="none" stroke="black" points="363.75,-90 218.25,-90 218.25,-36 363.75,-36 363.75,-90"/>
<text text-anchor="middle" x="291" y="-58.7" font-family="Times,serif" font-size="14.00">Yamaha RX&#45;V750</text>
</g>
<!-- tv&#45;&gt;avr -->
<g id="edge2" class="edge">
<title>tv&#45;&gt;avr</title>
<path fill="none" stroke="black" d="M182.67,-63C189.82,-63 197.95,-63 206.47,-63"/>
<polygon fill="black" stroke="black" points="206.33,-66.5 216.33,-63 206.33,-59.5 206.33,-66.5"/>
</g>
<!-- sub -->
<g id="node4" class="node">
<title>sub</title>
<polygon fill="none" stroke="black" points="520.5,-126 408.75,-126 408.75,-72 520.5,-72 520.5,-126"/>
<text text-anchor="middle" x="464.62" y="-94.7" font-family="Times,serif" font-size="14.00">Rythmik L12</text>
</g>
<!-- avr&#45;&gt;sub -->
<g id="edge3" class="edge">
<title>avr&#45;&gt;sub</title>
<path fill="none" stroke="black" d="M363.79,-78.06C374.83,-80.38 386.2,-82.76 397.12,-85.05"/>
<polygon fill="black" stroke="black" points="396.32,-88.46 406.83,-87.09 397.76,-81.61 396.32,-88.46"/>
</g>
<!-- speakers -->
<g id="node5" class="node">
<title>speakers</title>
<polygon fill="none" stroke="black" points="529.5,-54 399.75,-54 399.75,0 529.5,0 529.5,-54"/>
<text text-anchor="middle" x="464.62" y="-22.7" font-family="Times,serif" font-size="14.00">Magnepan LRS</text>
</g>
<!-- avr&#45;&gt;speakers -->
<g id="edge4" class="edge">
<title>avr&#45;&gt;speakers</title>
<path fill="none" stroke="black" d="M363.79,-47.94C371.94,-46.23 380.26,-44.49 388.44,-42.77"/>
<polygon fill="black" stroke="black" points="388.95,-46.24 398.02,-40.76 387.51,-39.39 388.95,-46.24"/>
</g>
</g>
</svg>
</figure>
<p>The issue with the Yamaha is that it would trigger protection at peak wattage, cutting out at high volume during movies or listening. So, I replaced the amplifier role with a dedicated one: a <a href="https://www.schiit.com/products/vidar2">Schiit Vidar</a>. Rated for 200 watts into 4Ω, it increased the peak volume but I <em>still</em> ran into its protection at high volumes. For years I planned to get a second Vidar and to run them in monoblock configuration, but the lack of 4Ω rating for the Vidar as a monoblock amplifier troubled me. I emailed Schiit and Daniel Katz responded:</p>
<blockquote>
<p>For the Vidar's it should be completely fine, but I do have to warn you that with some speakers that are 4 ohms in monoblock mode, it may go into protection as well, there is still that possibility.</p>
</blockquote>
<p>The other hurdle was that in monoblock mode, the Vidar only supports balanced inputs, but my Yamaha only provides unbalanced (RCA) preamp outputs. The next step up, the <a href="https://www.schiit.com/products/tyr">Schiit Tyr</a>, does support unbalanced inputs but would be around three thousand dollars for two monoblock amplifiers which can produce 350 watts into 4Ω.</p>
<h2 id="enter-bryston">Enter, Bryston</h2>
<p>Late last year, I visited Memphis and stopped into the <a href="https://memphislisteninglab.org/">Memphis Listening Lab</a>, where they have a stereo setup estimated to cost a quarter million dollars. In that stack of equipment, I spied a Bryston amplifier. I wondered if a Bryston amp could solve my problem, and glancing at the specs for the <a href="https://bryston.com/amplifiers/4b3/">Bryston 4B Cubed</a> I saw that it could output a mind-boggling 500 watts into 4Ω, but at a price tag of over seven thousand dollars -- out of my price range. But what about an older model? I read some <a href="https://www.soundstagenetwork.com/revequip/bryston_4b_st.htm">reviews</a> from the late 90s and searched on eBay, settling on a 4B-ST which is <a href="https://www.stereophile.com/content/bryston-4b-power-amplifier-measurements">rated</a> for 400 watts into 4Ω -- and totaling $1,400 shipped.</p>
<figure>
<img src="/resources/images/2024-02-16-hifi/bryston.jpg" alt="Bryston 4B-ST" />
<figcaption>Bryston 4B-ST</figcaption>
</figure>
<p>The Bryston has been incredible. Using the <a href="https://www.cdc.gov/niosh/topics/noise/app.html">NIOSH SLM app</a>, I was able to measure peaks of 102dB from the couch -- around ten feet from the speakers. A <a href="https://shop.p3international.com/products/kill-a-watt">Kill-A-Watt</a> reported bursts of power consumption at 700W. But even the Bryston can clip: playing some very loud music, I've caused clipping where the green lights on the front of the amp blink red, and even blown a fuse on the Magnepans. A clipped sound wave is essentially DC current at very high wattage, which can overheat the speaker circuitry; the fuse protects against this by sacrificing itself. I use a 250V 3A quick-blow fuse for my Magnepans.</p>
<figure class="graphviz">
<svg width="701pt" height="134pt" viewBox="0.00 0.00 701.00 134.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 130)"><polygon fill="white" stroke="none" points="-4,4 -4,-130 697,-130 697,4 -4,4"/><!-- atv --><g id="node1" class="node"><title>atv</title><polygon fill="none" stroke="black" points="92.25,-90 0,-90 0,-36 92.25,-36 92.25,-90"/><text text-anchor="middle" x="46.12" y="-58.7" font-family="Times,serif" font-size="14.00">Apple TV</text></g><!-- tv --><g id="node2" class="node"><title>tv</title><polygon fill="none" stroke="black" points="182.25,-90 128.25,-90 128.25,-36 182.25,-36 182.25,-90"/><text text-anchor="middle" x="155.25" y="-58.7" font-family="Times,serif" font-size="14.00">TV</text></g><!-- atv&#45;&gt;tv -->
<g id="edge1" class="edge">
<title>atv&#45;&gt;tv</title>
<path fill="none" stroke="black" d="M92.56,-63C100.58,-63 108.86,-63 116.64,-63"/>
<polygon fill="black" stroke="black" points="116.47,-66.5 126.47,-63 116.47,-59.5 116.47,-66.5"/>
</g>
<!-- avr -->
<g id="node3" class="node">
<title>avr</title>
<polygon fill="none" stroke="black" points="363.75,-90 218.25,-90 218.25,-36 363.75,-36 363.75,-90"/>
<text text-anchor="middle" x="291" y="-58.7" font-family="Times,serif" font-size="14.00">Yamaha RX&#45;V750</text>
</g>
<!-- tv&#45;&gt;avr -->
<g id="edge2" class="edge">
<title>tv&#45;&gt;avr</title>
<path fill="none" stroke="black" d="M182.67,-63C189.82,-63 197.95,-63 206.47,-63"/>
<polygon fill="black" stroke="black" points="206.33,-66.5 216.33,-63 206.33,-59.5 206.33,-66.5"/>
</g>
<!-- sub -->
<g id="node4" class="node">
<title>sub</title>
<polygon fill="none" stroke="black" points="519.38,-126 407.62,-126 407.62,-72 519.38,-72 519.38,-126"/>
<text text-anchor="middle" x="463.5" y="-94.7" font-family="Times,serif" font-size="14.00">Rythmik L12</text>
</g>
<!-- avr&#45;&gt;sub -->
<g id="edge3" class="edge">
<title>avr&#45;&gt;sub</title>
<path fill="none" stroke="black" d="M363.81,-78.16C374.62,-80.44 385.72,-82.79 396.4,-85.04"/>
<polygon fill="black" stroke="black" points="395.36,-88.4 405.87,-87.04 396.81,-81.55 395.36,-88.4"/>
</g>
<!-- amp -->
<g id="node5" class="node">
<title>amp</title>
<polygon fill="none" stroke="black" points="527.25,-54 399.75,-54 399.75,0 527.25,0 527.25,-54"/>
<text text-anchor="middle" x="463.5" y="-22.7" font-family="Times,serif" font-size="14.00">Bryston 4B&#45;ST</text>
</g>
<!-- avr&#45;&gt;amp -->
<g id="edge4" class="edge">
<title>avr&#45;&gt;amp</title>
<path fill="none" stroke="black" d="M363.81,-47.84C371.95,-46.12 380.25,-44.37 388.41,-42.64"/>
<polygon fill="black" stroke="black" points="388.88,-46.12 397.95,-40.63 387.44,-39.27 388.88,-46.12"/>
</g>
<!-- speakers -->
<g id="node6" class="node">
<title>speakers</title>
<polygon fill="none" stroke="black" points="693,-54 563.25,-54 563.25,0 693,0 693,-54"/>
<text text-anchor="middle" x="628.12" y="-22.7" font-family="Times,serif" font-size="14.00">Magnepan LRS</text>
</g>
<!-- amp&#45;&gt;speakers -->
<g id="edge5" class="edge">
<title>amp&#45;&gt;speakers</title>
<path fill="none" stroke="black" d="M527.38,-27C535.27,-27 543.42,-27 551.48,-27"/>
<polygon fill="black" stroke="black" points="551.29,-30.5 561.29,-27 551.29,-23.5 551.29,-30.5"/>
</g>
</g>
</svg>
</figure>
<h2 id="punch">Punch</h2>
<p>To celebrate, I made a batch of <a href="https://punchdrink.com/recipes/punch-house-regent-punch/">Regent Punch</a> (<a href="/resources/pdfs/regent-punch-recipe.pdf">printable recipe</a>) and invited a few friends for a listening party. We passed around the Apple TV remote and queued up songs throughout the night. I think the best-sounding were songs like Landslide by Fleetwood Mac, with acoustic instruments and clear vocals. Rock songs were a bit harder to appreciate, but dance music like Hung Up by Madonna or Loneliness (Klub Cut) by Tomcraft was carried by the subwoofer. The subwoofer is usually blended at closer to a quarter turn on the volume knob, but I moved it an additional quarter turn as we played more dance music. The Bryston has 29dB of gain compared to the Vidar's 26dB, so the subwoofer may need to be adjusted up to blend correctly anyway. Reception was very positive.</p>
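<p>That 3dB difference in gain is easy to put in numbers: a gain delta in dB converts to a voltage ratio via 10^(dB/20) and a power ratio via 10^(dB/10). A quick sketch with <code>awk</code>:</p>
<pre><code class="language-sh"># Convert the amplifier gain difference (29dB Bryston vs. 26dB Vidar)
# into voltage and power ratios.
awk 'BEGIN {
  db = 29 - 26
  printf &quot;voltage ratio: %.2f\n&quot;, 10^(db/20)  # ~1.41x output voltage
  printf &quot;power ratio:   %.2f\n&quot;, 10^(db/10)  # ~2x power at the same input level
}'
</code></pre>
<p>At the same preamp setting, the Bryston pushes roughly twice the power into the speakers that the Vidar did, which is why the subwoofer blend needed revisiting.</p>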
<figure>
<img src="/resources/images/2024-02-16-hifi/punch.jpg" alt="Punch bowl filled with Regent's Punch" />
<figcaption>Punch bowl filled with Regent's Punch</figcaption>
</figure>
<p>The punch is a recipe published by Punch Magazine, an excellent source of cocktail recipes, taken from Will Duncan of Chicago's Punch House. Their article <a href="https://punchdrink.com/articles/how-well-do-you-know-history-of-punch-recipes/">How Well Do You Know the Flowing Bowl</a> describes it as a favorite of King George IV. I made the following substitutions due to availability:</p>
<table>
<thead>
<tr>
<th>Original</th>
<th>Used</th>
</tr>
</thead>
<tbody>
<tr>
<td>Batavia arrack</td>
<td>Cachaça -- a Brazilian liquor distilled from sugarcane.</td>
</tr>
<tr>
<td>Hamilton Jamaican Pot Still Gold Rum</td>
<td>Hamilton Jamaican Pot Still Black Rum, the same rum with more artificial coloring. The color of the punch was largely unchanged after addition.</td>
</tr>
<tr>
<td>Champagne</td>
<td>Lamarca Prosecco</td>
</tr>
</tbody>
</table>
<p>In <a href="https://punchdrink.com/articles/batavia-arrack-cocktail-recipes/">How to Use Batavia Arrack in Cocktails</a>, several bartenders express how similar it is to rum:</p>
<blockquote>
<p>It is closest to rhum agricole in comparison</p>
</blockquote>
<blockquote>
<p>It is, essentially, a funky Indonesian ancestor to rum.</p>
</blockquote>
<p>Cachaça, rum and rhum agricole are all distilled from sugarcane.</p>
<p>Additionally, I scaled the recipe by one and a half times, and used fresh juice for each of the lemon, orange, and pineapple juices via the following methods:</p>
<table>
<thead>
<tr>
<th>Fruit</th>
<th>Method</th>
</tr>
</thead>
<tbody>
<tr>
<td>Lemons</td>
<td>I squeeze with the Chef'n FreshForce Citrus Juicer, the Editor's Choice in <a href="https://www.consumerreports.org/home-garden/best-citrus-squeezers-a1065541201/">Consumer Reports' Best Citrus Squeezers</a> -- it's the simplest to handle when squeezing the nearly dozen lemons this recipe requires. Each yields about 1/8 cup of juice, or 2 tablespoons.</td>
</tr>
<tr>
<td>Oranges</td>
<td>I use an old Proctor-Silex JUICIT -- a countertop appliance with a spinning reamer and pulp catcher which makes juicing oranges as easy as halving them and pressing them onto the reamer. Each yields about 1/2 cup of juice.</td>
</tr>
<tr>
<td>Pineapple</td>
<td>I peel and chop before transferring to a Vitamix and then straining the slurry through a fine wire strainer -- the remaining pulp is likely good for your digestion. Each yields about 2 cups of juice.</td>
</tr>
</tbody>
</table>
<p>We visited our local Asian supermarket for green tea. For three cups, I placed three tablespoons of dry green tea in a French press, heated a kettle of water, waited for it to cool to 180F (using an instant-read thermometer), and combined them to steep. After three minutes, the tea is ready to strain into the oleo saccharum.</p>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2024-01-02-upgrade-kubeadm</id>
    <title>Fixing expired kubeadm certs</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2024-01-02-upgrade-kubeadm" />
    <published>2024-01-02T00:00:00-05:00</published>
    <summary>Refreshing certs and updating kubeadm</summary>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>My Kubernetes certificates expired as I was publishing the last blog post; I was able to resolve it by following these steps on Fedora with <code>kubeadm</code>:</p>
<ol>
<li>
<p>Confirm the certificates are expired by running:</p>
<pre><code class="language-sh">; kubeadm certs check-expiration
</code></pre>
</li>
<li>
<p>Update the certificates manually by shelling into a control plane node and running:</p>
<pre><code class="language-sh">; kubeadm certs renew all
</code></pre>
</li>
<li>
<p>Now, upgrade <code>kubeadm</code> to the next minor release. Find your current version with:</p>
<pre><code class="language-sh">; kubeadm version
kubeadm version: &amp;version.Info{Major:&quot;1&quot;, Minor:&quot;26&quot;, GitVersion:&quot;v1.26.9&quot;, GitCommit:&quot;d1483fdf7a0578c83523bc1e2212a606a44fd71d&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2023-09-13T11:31:28Z&quot;, GoVersion:&quot;go1.20.8&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
</code></pre>
<p>Then find the latest patch version of the next minor release:</p>
<pre><code class="language-sh">; yum list available --disablerepo='*' --enablerepo=kubernetes --showduplicates --disableexcludes=kubernetes
</code></pre>
<p>If you see something like:</p>
<pre><code>Errors during downloading metadata for repository 'kubernetes':
- Status code: 404 for https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml (IP: 2607:f8b0:4024:c02::8b)
Error: Failed to download metadata for repo 'kubernetes': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Ignoring repositories: kubernetes
</code></pre>
<p>then update <code>/etc/yum.repos.d/kubernetes.repo</code> (see <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-using-native-package-management">k8s package management</a>), replacing the version with the next minor release:</p>
<pre><code class="language-sh"># This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
cat &lt;&lt;EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
EOF
</code></pre>
<p>You must update this file on each machine for each minor version upgrade.</p>
</li>
<li>
<p>Install that version:</p>
<pre><code class="language-sh">; sudo yum install -y kubeadm-'1.28.15-*' --disableexcludes=kubernetes
</code></pre>
</li>
<li>
<p>Plan an upgrade:</p>
<pre><code class="language-sh">; sudo kubeadm upgrade plan
</code></pre>
</li>
<li>
<p>Upgrade kubelet:</p>
<pre><code class="language-sh">; sudo yum install -y kubelet-'1.28.15-*' kubectl-'1.28.15-*' --disableexcludes=kubernetes
</code></pre>
</li>
<li>
<p>On each worker node, install the same version of <code>kubeadm</code>, after updating <code>/etc/yum.repos.d/kubernetes.repo</code> as above:</p>
<pre><code class="language-sh">; sudo yum install -y kubeadm-'1.28.15-*' --disableexcludes=kubernetes
</code></pre>
</li>
<li>
<p>Upgrade the node:</p>
<pre><code class="language-sh">; sudo kubeadm upgrade node
</code></pre>
</li>
<li>
<p>Upgrade kubelet:</p>
<pre><code class="language-sh">; sudo yum install -y kubelet-'1.28.15-*' kubectl-'1.28.15-*' --disableexcludes=kubernetes
</code></pre>
</li>
<li>
<p>On a control plane node, apply the version you installed earlier:</p>
<pre><code class="language-sh">; sudo kubeadm upgrade apply v1.28.15
</code></pre>
</li>
</ol>
<ol start="11">
<li>
<p>On all nodes, restart <code>kubelet</code>:</p>
<pre><code class="language-sh">; sudo systemctl restart kubelet.service
</code></pre>
</li>
<li>
<p>On a control plane node, copy the <code>admin.conf</code> to your user's config:</p>
<pre><code class="language-sh">; sudo cp /etc/kubernetes/admin.conf ~/.kube/config
</code></pre>
</li>
<li>
<p>Copy the new kube config to your machine for access:</p>
<pre><code class="language-sh">; mv ~/.kube/config ~/.kube/config.bak
; rsync k1.home.arpa:~/.kube/config ~/.kube/config
</code></pre>
</li>
</ol>
<p>I also had to run</p>
<pre><code class="language-sh">; sudo dnf remove zram-generator-defaults
; sudo swapoff -a
</code></pre>
<p>to permanently disable swap, which was causing <code>kubelet</code> to fail.</p>
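<p>Removing the zram generator covers Fedora's default zram swap, but a disk-based swap partition or file listed in <code>/etc/fstab</code> will come back on reboot. A fuller sketch -- the <code>sed</code> expression here is one way to comment out fstab swap entries; double-check the file by hand afterwards:</p>
<pre><code class="language-sh"># Disable swap now, and keep it from returning at boot
sudo swapoff -a                                # turn off all active swap immediately
sudo dnf remove -y zram-generator-defaults     # stop zram swap being recreated at boot
sudo sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab  # comment out disk-based swap entries
swapon --show                                  # no output means no active swap
</code></pre>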
<h2 id="scripts">Scripts</h2>
<p>You could run a script like this on the control plane:</p>
<pre><code class="language-sh">#!/usr/bin/env bash
set -euxo pipefail

if ! command -v jq 2&gt;&amp;1 &gt;/dev/null
then
   sudo dnf install --assumeyes --quiet jq
fi

case &quot;$1&quot; in
plan)
   version=$(
      kubeadm version --output json \
      | jq --raw-output '&quot;\(.clientVersion.major).\(.clientVersion.minor | tonumber + 1)&quot;'
   )

   cat &lt;&lt;EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v${version}/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v${version}/rpm/repodata/repomd.xml.key
EOF

   pkg_version=$(
      dnf list available --disablerepo='*' --enablerepo=kubernetes --showduplicates --disableexcludes=kubernetes \
      | awk '{ print $2 }' \
      | grep &quot;${version}&quot; \
      | sort -V \
      | uniq \
      | tail -n 1
   )

   sudo dnf install --assumeyes --quiet &quot;kubeadm-${pkg_version}&quot; --disableexcludes=kubernetes
   sudo kubeadm upgrade plan
   sudo yum install --assumeyes --quiet &quot;kubelet-${pkg_version}&quot; &quot;kubectl-${pkg_version}&quot; --disableexcludes=kubernetes
   ;;
apply)
   # kubeadm has already been upgraded
   version=$(
      kubeadm version --output json \
      | jq --raw-output '.clientVersion.gitVersion'
   )
   sudo kubeadm upgrade apply &quot;${version}&quot;
   sudo systemctl restart kubelet.service
   ;;
*)
   echo &quot;Unknown action $1&quot; &gt;&amp;2
   exit 1
   ;;
esac
</code></pre>
<p>and this on each node:</p>
<pre><code class="language-sh">#!/usr/bin/env bash
set -euxo pipefail

if ! command -v jq 2&gt;&amp;1 &gt;/dev/null
then
   sudo dnf install --assumeyes --quiet jq
fi

case &quot;$1&quot; in
apply)
   version=$(
   kubeadm version --output json \
   | jq --raw-output '&quot;\(.clientVersion.major).\(.clientVersion.minor | tonumber + 1)&quot;'
   )

   cat &lt;&lt;EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v${version}/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v${version}/rpm/repodata/repomd.xml.key
EOF

   pkg_version=$(
   dnf list available --disablerepo='*' --enablerepo=kubernetes --showduplicates --disableexcludes=kubernetes \
   | awk '{ print $2 }' \
   | grep &quot;${version}&quot; \
   | sort -V \
   | uniq \
   | tail -n 1
   )

   sudo dnf install --assumeyes --quiet &quot;kubeadm-${pkg_version}&quot; --disableexcludes=kubernetes
   sudo kubeadm upgrade node
   sudo yum install --assumeyes --quiet &quot;kubelet-${pkg_version}&quot; &quot;kubectl-${pkg_version}&quot; --disableexcludes=kubernetes
   ;;
restart)
   sudo systemctl restart kubelet.service
   ;;
*)
   echo &quot;Unknown action $1&quot; &gt;&amp;2
   exit 1
   ;;
esac
</code></pre>
<p>Execute</p>
<ul>
<li><code>./upgrade-k8s.sh plan</code> on the control node(s) first,</li>
<li>then <code>./upgrade-k8s.sh apply</code> on the worker(s),</li>
<li>then <code>./upgrade-k8s.sh apply</code> on the control plane node(s),</li>
<li>finally <code>./upgrade-k8s.sh restart</code> on the worker(s).</li>
</ul>
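<p>Assuming passwordless <code>ssh</code> to every node and <code>upgrade-k8s.sh</code> already copied to each home directory, that ordering can be wrapped in one driver function run from a workstation. The hostnames below are placeholders for your own nodes:</p>
<pre><code class="language-sh">#!/usr/bin/env bash
set -euo pipefail

# Run the upgrade phases across the cluster in the order above.
upgrade_all() {
   local control_planes=(k1.home.arpa)
   local workers=(k2.home.arpa k3.home.arpa)
   local host
   for host in &quot;${control_planes[@]}&quot;; do ssh &quot;$host&quot; ./upgrade-k8s.sh plan; done
   for host in &quot;${workers[@]}&quot;; do ssh &quot;$host&quot; ./upgrade-k8s.sh apply; done
   for host in &quot;${control_planes[@]}&quot;; do ssh &quot;$host&quot; ./upgrade-k8s.sh apply; done
   for host in &quot;${workers[@]}&quot;; do ssh &quot;$host&quot; ./upgrade-k8s.sh restart; done
}
</code></pre>
<p>Call <code>upgrade_all</code> once the scripts are in place on each node.</p>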
<h2 id="rollbacks">Rollbacks</h2>
<p>When upgrading from 1.30 to 1.31, I experienced an issue where all pods began to crash, and <code>kubelet</code> errored with:</p>
<pre><code>Error: services have not yet been read at least once, cannot construct envvars
</code></pre>
<p>This was because in a previous iteration of these instructions, <code>kubelet</code> restarted <em>before</em> the cluster components were upgraded.</p>
<p>To revert, on all machines run:</p>
<pre><code class="language-sh">; cat &lt;&lt;EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
EOF
</code></pre>
<p>with the previous minor version, then find the right patch version with:</p>
<pre><code class="language-sh">; yum list available --disablerepo='*' --enablerepo=kubernetes --showduplicates --disableexcludes=kubernetes
</code></pre>
<p>with that, run the following on all nodes:</p>
<pre><code class="language-sh">sudo dnf install --assumeyes --quiet &quot;kubeadm-1.30.8-*&quot; --disableexcludes=kubernetes
sudo yum install --assumeyes --quiet &quot;kubelet-1.30.8-*&quot; &quot;kubectl-1.30.8-*&quot; --disableexcludes=kubernetes
sudo systemctl restart kubelet.service
</code></pre>
<p>After downgrading to 1.30.8, all pods stopped crashing.</p>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2024-01-01-ibm-pc-xt</id>
    <title>Discovering the IBM PC XT</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2024-01-01-ibm-pc-xt" />
    <published>2024-01-01T00:00:00-05:00</published>
    <summary>Exploring the IBM PC XT, the sequel to the original PC</summary>
    
    <media:content url="https://connor.zip/resources/images/2024-01-01-ibm-pc-xt/dual-monitor.jpg" medium="image" width="600" height="800"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>Happy new year! As 2023 comes to a close, I'd like to start documenting the journey I've been on over the last month or two, exploring the IBM PC XT.</p>
<p>In mid-November, I came across a listing for an IBM PC XT alongside its monochrome monitor and Model M keyboard in my local area. I met the seller at his home, and he introduced himself as a researcher at the local medical school (UAMS) who was planning to move to Northwest Arkansas. The PC was actually the driver for a <a href="https://en.wikipedia.org/wiki/IBM_System/36#System/36_Model_5364">System/36 model 5364</a> which he also had for sale, and contained the driver card -- a brown ISA card with a many-pin connector on the back <sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> -- which I asked him to keep for whoever purchased the System/36. He reached out a few weeks later to say someone had driven down from Oklahoma for it, and was very happy to have the controller card. There are examples of a working <a href="https://forum.vcfed.org/index.php?threads/ibm-system-36-5364.69410/">&quot;Desktop 36&quot;</a>; here's a <a href="https://www.youtube.com/watch?v=TxYtD1kLAjU">video</a> of one booting up, but it seems like documentation and software are scarce.</p>
<p>Below is the system as I received it, with only a half-sized 5.25&quot; double-sided, double-density (360k) floppy disk drive and no hard drive. The keyboard is a Model F I had on-hand while I was cleaning the Model M. The floppy drive is not completely functional, generating a 601 on boot and sometimes failing to read media (always failing to boot from media).</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/basic.jpg" alt="An IBM PC XT booted into ROM BASIC" />
<figcaption>An IBM PC XT booted into ROM BASIC</figcaption>
</figure>
<p>The system contains an Intel 8088 without an 8087 floating point coprocessor (they are seldom present), and 512k of RAM populated into a 256k-640k board. The first board revision supports 64k-256k of RAM, while the second supports 256k-640k. The 8088 uses a 20-bit address bus, which can only address 1MB of memory. The lower 640k of this 1MB is available for RAM -- this is called &quot;low memory&quot; -- while the rest, &quot;upper memory,&quot; is reserved for use by cards. On boards with 64k-256k of RAM populated, the remaining 384k of low memory can be provided by memory expansion in the I/O channel. Upper memory is used for hardware-mapped video memory, the system BIOS, and BIOS expansions such as hard drive controller card ROMs. On later systems this address space is called <a href="https://wiki.osdev.org/Memory_Map_(x86)#Real_mode_address_space_.28.3C_1_MiB.29">real mode</a>.</p>
<p>The earliest PCs such as the XT had only 8-bit <a href="https://en.wikipedia.org/wiki/Industry_Standard_Architecture">ISA</a> slots; the PC AT added longer 16-bit slots -- 8-bit cards work in 16-bit slots and some 16-bit cards support 8-bit slots. Inside my XT were:</p>
<ul>
<li>an IBM Monochrome Graphics Adapter card, which includes the TTL (transistor-transistor logic) monitor connector for the IBM Personal Computer Display and a DB25 parallel port for a printer;</li>
<li>a <a href="https://en.wikipedia.org/wiki/Floppy-disk_controller">Floppy Disk Controller</a> card which utilizes an &quot;edge connector&quot; instead of a 34-pin connector as found in later controller cards;</li>
<li>and a <a href="https://en.wikipedia.org/wiki/Synchronous_Data_Link_Control">Synchronous Data Link Control (SDLC)</a> card likely for use with IBM's Systems Network Architecture (SNA).</li>
</ul>
<p>Each card may use an <a href="http://philipstorr.id.au/pcbook/book2/irq.htm">interrupt request</a> (IRQ), an I/O address, and portions of high memory for ROM. Multi-purpose cards may use several of each. These are often configurable either through software or dip switches on the card, and must not conflict. For ROMs, lower addresses are run first, so ROMs can be sequenced. On the XT, only IRQs 0-7 are available, with 0 and 1 taken by the system timer and keyboard. The canonical assignments are:</p>
<table>
<thead>
<tr>
<th>IRQ</th>
<th>Used for</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>RAM refresh and clock tick</td>
</tr>
<tr>
<td>1</td>
<td>Keyboard</td>
</tr>
<tr>
<td>2</td>
<td>Enhanced Graphics Adapter</td>
</tr>
<tr>
<td>3</td>
<td>Serial port <code>COM2</code>/<code>COM4</code></td>
</tr>
<tr>
<td>4</td>
<td>Serial port <code>COM1</code>/<code>COM3</code></td>
</tr>
<tr>
<td>5</td>
<td>Hard disk drive controller</td>
</tr>
<tr>
<td>6</td>
<td>Floppy disk drive controller</td>
</tr>
<tr>
<td>7</td>
<td>Parallel port <code>LPT1</code></td>
</tr>
</tbody>
</table>
<p>See this <a href="https://stanislavs.org/helppc/ports.html">listing of I/O addresses with their usage</a>; so far I've used:</p>
<table>
<thead>
<tr>
<th>I/O Address</th>
<th>Card</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>0x300</code></td>
<td>XT-IDE controller card</td>
</tr>
<tr>
<td><code>0x320</code></td>
<td>Hard drive controller card</td>
</tr>
<tr>
<td><code>0x360</code></td>
<td>Network card</td>
</tr>
</tbody>
</table>
<p>ROM addresses must align on 2k boundaries, but the cards I've seen use 8k ROMs and allow moving to several 8k aligned addresses. Here is an example setup:</p>
<table>
<thead>
<tr>
<th>ROM Address</th>
<th>Card</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>0xCC00</code></td>
<td>XT-IDE controller card</td>
</tr>
<tr>
<td><code>0xCE00</code></td>
<td>High density floppy disk controller card</td>
</tr>
<tr>
<td><code>0xC800</code></td>
<td>Hard drive controller card</td>
</tr>
<tr>
<td><code>0xD000</code></td>
<td>Network card expansion ROM slot</td>
</tr>
</tbody>
</table>
<p>Some ROMs replace the Initial Program Load (<code>INT 0x19</code>), such as the high density floppy controller or XT-IDE; ordering these is important. For instance, to configure the high density floppy controller card, we need its IPL to be last, so it needs to be configured at a higher address than an XT-IDE. In other cases, the XT-IDE ROM needs to be placed higher so that we see its UI on boot instead of having to wait for the high density floppy controller's ROM to attempt to boot from a floppy before passing control.</p>
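<p>The BIOS finds these expansion ROMs by scanning upper memory on 2k boundaries for a standard header: a 0x55 0xAA signature, a length byte counted in 512-byte blocks, and contents whose bytes sum to zero mod 256 -- a bad checksum causes the ROM to be skipped. A sketch, runnable on a modern Linux machine rather than the XT itself, that builds and verifies a minimal 512-byte image:</p>
<pre><code class="language-sh">#!/usr/bin/env bash
set -euo pipefail

# Build a minimal 512-byte option ROM image: 0x55 0xAA signature, a length
# byte of 0x01 (1 * 512 bytes), zero padding, then a final checksum byte.
rom=$(mktemp)
printf '\x55\xaa\x01' &gt; &quot;$rom&quot;
head -c 508 /dev/zero &gt;&gt; &quot;$rom&quot;
# Bytes so far sum to 0x55 + 0xAA + 0x01 = 0x100 = 0 mod 256, so the
# checksum byte is simply 0x00.
printf '\x00' &gt;&gt; &quot;$rom&quot;

# Verify the way the BIOS does: all bytes summed must be 0 mod 256.
sum=$(od -An -tu1 -v &quot;$rom&quot; | awk '{ for (i = 1; i &lt;= NF; i++) s += $i } END { print s % 256 }')
echo &quot;byte sum mod 256: $sum&quot;
rm -f &quot;$rom&quot;
</code></pre>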
<p>A program like <a href="https://winworldpc.com/product/checkit/30">CheckIt</a> is useful for determining what IRQs, addresses, etc. are used and by what hardware.</p>
<h2 id="installing-software">Installing Software</h2>
<p>Without a hard drive, a working floppy drive, or any floppy disks, the first challenge was loading software onto the machine. I utilized two pieces of modern equipment popular in the community:</p>
<ul>
<li>
<p>An <a href="https://users.glitchwrks.com/~glitch/2017/11/23/xt-ide-rev4">XT-IDE rev. 4</a>, which is an ISA card that provides an IDE connector. It utilizes the <a href="https://www.xtideuniversalbios.org/">XTIDE Universal BIOS</a> in ROM to allow booting from IDE media. You can buy these as a kit or <a href="https://www.ebay.com/itm/134715714236?hash=item1f5daeaabc:g:kukAAOSwaeBlduhy">fully assembled</a>. Alongside this you'll need IDE media; I used a CF to IDE <a href="https://www.ebay.com/itm/114930622407?var=415061305916">adapter</a> (CF cards are uniquely suited to IDE) and a 128MB CF card I had lying around, along with a <a href="https://www.amazon.com/dp/B07KS5SDZN">molex to floppy drive adapter cable</a>.</p>
<p>A <a href="https://www.amazon.com/dp/B06XSSHZ63">USB CF card reader</a> then provides the easiest method to copy files to and from your PC, or to do backups.</p>
</li>
<li>
<p>A Gotek running <a href="https://github.com/keirf/flashfloppy">FlashFloppy</a>, which is a floppy emulator which allows booting from or reading and writing to floppy images on a USB drive. These are available <a href="https://www.ebay.com/itm/234917142656">pre-flashed</a>, or you can acquire a Gotek (note the <a href="https://github.com/keirf/flashfloppy/wiki/Gotek-Models">model</a>) and flash it yourself. You will need a molex to floppy drive power adapter as above, and either an <a href="https://www.ebay.com/itm/126151414945">edge connector to pin adapter</a> or a floppy drive controller and cable with pins. The Gotek connector has no plastic guide, so ensure the pins line up and that the markings for pins 1 and 34 match (on a ribbon cable, the red one is pin one).</p>
<p>Note that an XT cannot read 1.4MB high density 3.5&quot; floppy images without an expansion ROM like the one in <a href="#high-density-floppy-disks">Sergey's floppy controller</a>. This limits any images we use to 360k 5.25&quot; disk images. Be sure to add a <a href="https://github.com/keirf/flashfloppy/wiki/FF.CFG-Configuration-File">config file</a> with something like:</p>
<pre><code>interface=ibmpc
host=pc-dos
display-type=oled-128x64-rotate
</code></pre>
<p>If you have a MacBook like I do, a drive with both <a href="https://www.amazon.com/dp/B07YYK13LF">USB C and USB A connectors</a> is convenient.</p>
</li>
</ul>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/cards.jpg" alt="ISA cards populating an XT, including an XT-IDE" />
<figcaption>ISA cards populating an XT, including an XT-IDE</figcaption>
</figure>
<p>With these installed, it's now possible to load software from the disk image (see this <a href="https://youtu.be/rg81DCacDqg?si=V5r1Ti27gn6N_oW1">video</a> for more context):</p>
<ul>
<li>Copy a disk image such as <a href="https://winworldpc.com/product/pc-dos/3x">IBM PC-DOS 3.30 (5.25)</a> onto our flash drive. You'll need a copy of <a href="/resources/bin//dos/WIPEDISK.EXE"><code>WIPEDISK.EXE</code></a> on that image or another.</li>
<li>Insert the flash drive into the Gotek and power on the XT; you should see the image name displayed on the Gotek (if not, use the dial to locate it). The machine should attempt to boot from the Gotek.</li>
<li>You can simply hit <code>ENTER</code> at the date and time prompts.</li>
<li>Run <code>WIPEDISK.EXE</code> to zero out the CF card. If booting to e.g. MS-DOS 5.0, you'll need to hit F3 to exit the setup utility; simply run <code>SETUP</code> after wiping.</li>
</ul>
<p>The steps below apply to IBM PC-DOS 3.30, but not to later DOS installations with automated setups like MS-DOS 5.0:</p>
<ul>
<li>
<p>Run <code>fdisk</code> and create a primary DOS partition. DOS 3.3 supports a maximum partition size of 32MB, so a later version of DOS is necessary for partitions up to 2GB using FAT16. You may create additional logical partitions to use up more space with additional drive letters.</p>
</li>
<li>
<p>For DOS to recognize the new partition, we must reboot. Then we can run <code>format c: /s</code> to format the drive as a system boot drive. You can also run <code>format c:</code> followed by <code>sys c:</code> for the same effect.</p>
</li>
<li>
<p>Next, we'll need to copy all files to a new DOS folder:</p>
<pre><code class="language-sh">MKDIR C:\DOS
XCOPY A:\*.* C:\DOS
</code></pre>
</li>
</ul>
<p>With an operating system installed, we can now start installing software. For instance, <a href="https://winworldpc.com/product/lotus-1-2-3/2x">Lotus 1-2-3 2.3</a> or <a href="https://winworldpc.com/product/wordperfect/5x-dos">WordPerfect 5.1</a> as period-correct productivity software.</p>
<h3 id="high-density-floppy-disks">High Density Floppy Disks</h3>
<p>High Density disks such as 1.2MB 5.25&quot; floppies or 1.44MB 3.5&quot; floppies require new firmware on an expansion ROM. Sergey's <a href="https://github.com/skiselev/isa-fdc"><code>isa-fdc</code></a> project provides this firmware on a ROM in an ISA floppy drive controller card capable of supporting &quot;IBM PC, AT, and PS/2 floppy types from 160 KB 5.25&quot; single side disks to 2.88 MB 3.5&quot; ED (Extended Density) disks.&quot;</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/sergey-floppy-controller.jpg" alt="Sergey Kiselev's Floppy Disk Controller" />
<figcaption>Sergey Kiselev's Floppy Disk Controller</figcaption>
</figure>
<p>These can be purchased <a href="https://www.ebay.com/itm/283729440716">pre-assembled</a>, although shipping from Bulgaria does take a while. The key for the DIP switches is conveniently printed on the back. Mine was initially set to a ROM address which collided with my Seagate ST11R hard drive controller card, so I set switch block one to <code>01000001</code> and two to <code>11101110</code>. This combination uses IRQ 4 for COM1 and the corresponding I/O port <code>0x3F8</code>, enables the ROM, makes the EEPROM writable so we can run the setup with F2, and sets the ROM address to <code>0xD0000</code> (what the XT-IDE uses). I also set the motherboard DIP switches to reflect one floppy disk, since I only wanted to use a single drive at a time.</p>
<p>Booting the machine with the hard drive controller in place, we can leave the floppy drive empty and boot from our DOS installation on the hard drive. With an empty physical 3.5&quot; drive I get a &quot;Boot failed, error 80&quot; message, where I can press <code>F</code> to boot from the hard drive -- this could be an issue with the drive itself. If I then insert the disk, I can navigate to <code>A:</code> in DOS and see the files with <code>dir</code>. Funnily enough, the last edited timestamp of each file corresponds with the DOS version -- 6:22AM on May 31st, 1994.</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/dir-hd-floppy.jpg" alt="DIR on a 3.5 inch 1.44MB floppy disk" />
<figcaption><code>DIR</code> on a 3.5 inch 1.44MB floppy disk</figcaption>
</figure>
<p>I tested booting from high density disk both with an XT-IDE using an image of my DOS 6.22 setup disk I created with <code>dd</code>, and a real 3.5&quot; drive with the real disk. Both displayed &quot;Starting MS-DOS&quot; before hanging, which I believe is a <a href="https://forum.vcfed.org/index.php?threads/ms-dos-6-22-setup-hangs-on-ibm-pc-xt-5160.33931/">common problem</a> with the 6.22 installer on the XT. On a different XT with the same amount of memory, I was able to load the 6.22 installer from a Gotek after adding <code>interface=ibmpc</code> to the <code>FF.CFG</code> file.</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/dos-622.jpg" alt="DOS 6.22 Installer" />
<figcaption>DOS 6.22 Installer</figcaption>
</figure>
<p>In case (like me) you accidentally wipe it while attempting to configure another card, the <a href="/resources/bin/dos/floppy-bios-2.2.bin">ROM image</a> can be reflashed using <code>XTIDECFG</code>.</p>
<h3 id="printing">Printing</h3>
<p>Each application provides its own printer drivers, and printing under DOS is reliant on parallel ports. Lotus 1-2-3 configures printers during installation or later by running <code>INSTALL</code>. For a modern printer like an HP LaserJet, I use the Apple LaserWriter driver, which sends PostScript to the printer. Early HP Laser printer drivers may also work since they'll use an older version of HP's control language.</p>
<p>With the LocalTalk PC card and AppleShare PC software described below, you can print to virtual parallel ports which are networked printers. I've successfully printed to my Apple ImageWriter II over LocalTalk directly, and my HP LaserJet with a 635n EIO card over Ethernet via an AsanteTalk bridge.</p>
<p>These parallel ports are available to DOS as well, not just applications with printer drivers. A command like:</p>
<pre><code>C:&gt; DIR &gt;LPT3
</code></pre>
<p>will pipe a directory listing to our third parallel port, in this case a virtual port managed by AppleShare PC and configured to send to the ImageWriter II.</p>
<h2 id="upgrading">Upgrading</h2>
<p>To max out the memory and add a coprocessor, I found an XT motherboard on eBay:</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/motherboard.jpg" alt="IBM Motherboard" />
<figcaption>IBM Motherboard</figcaption>
</figure>
<p>Using a <a href="https://www.amazon.com/dp/B00433SJB2">chip lifter</a>, an indispensable tool when dealing with old hardware, I was able to move the last two banks of memory onto my XT's motherboard. On the 256k-640k boards, the last two banks use the smaller memory chips which are used in the 64k-256k boards. Once those banks were populated, I just needed to toggle <a href="https://www.minuszerodegrees.net/5160/misc/5160_motherboard_switch_settings.htm">dip switches 3 and 4 off</a> and the memory test passed without issue.</p>
<p>The 8087 coprocessor was similarly simple, although lifting longer chips and fitting them into the socket takes plenty of light and patience. Then flip dip switch 2 off. This allows for quicker floating point operations in programs like Lotus.</p>
<p>A <a href="https://en.wikipedia.org/wiki/NEC_V20">NEC V20</a> processor is another great investment: for only <a href="https://www.ebay.com/itm/166438960350">$3.35</a> and some waiting, you can swap out your 8088 for an 80186-compatible, faster processor -- without changing the clock speed.</p>
<h2 id="graphics">Graphics</h2>
<p>IBM Color Displays are expensive, but after a week or two of searching, emailing Craigslist sellers, etc., I found a listing on eBay for an entire IBM PC XT with Hercules Color Card and a CGA Display and made an offer. At the same time, I made an offer on another CGA Display. My luck was such that both were accepted, and I ended up with two working CGA monitors and another entire XT. Around the same time I found another XT with an IBM CGA card, one that'd been stored in a garage and was a bit dirty. By some miracle, both of these XTs worked, were configured at the maximum of 640k of RAM (the first via expansion card, and the second via 640K mainboard), and contained a hard drive controller card and working hard drive.</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/dual-monitor.jpg" alt="Turbo Pascal taking advantage of color and monochrome monitors" />
<figcaption>Turbo Pascal taking advantage of color and monochrome monitors</figcaption>
</figure>
<p>To enable color, you'll need to install a CGA card (e.g. a Hercules Color Card or IBM Color Graphics Adapter), and toggle <a href="https://www.minuszerodegrees.net/5160/misc/5160_motherboard_switch_settings.htm">dip switch</a> 5 (80 column) or 6 (40 column) on the motherboard on. With both an MDA and CGA (or Hercules Color Card), applications can utilize two monitors. With Turbo Pascal 7.0, simply use the <code>/d</code> option. Lotus 1-2-3 can also utilize the color monitor to display graphs, as demonstrated in this <a href="https://www.youtube.com/watch?v=I7syVsEk7dU">video</a>.</p>
<p>CGA graphics are lower resolution than monochrome, so they are more useful for graphs or games, while the monochrome monitor is better for text.</p>
<p>Try <a href="https://deathshadow.com/pakuPaku">PAKU PAKU</a> for a game which takes advantage of CGA graphics. You'll notice a little &quot;snow,&quot; which happens when software writes to the video memory while the video memory is being read out to the display. The XT-IDE BIOS also takes advantage of CGA to highlight the boot options in color, but also produces some snow.</p>
<h3 id="monochrome">Monochrome</h3>
<p>On the monochrome side, there are improvements to be made as well. Hercules Graphics, a de-facto standard supported by many applications and cards, adds a bitmapped graphics mode to the functionality provided by the IBM Monochrome Graphics Adapter. Lotus 1-2-3 uses this to depict graphs, while Hercules' <code>HBASIC</code> is a version of BASIC with facilities for drawing graphics to the screen as discussed in this 1983 BYTE Magazine <a href="/resources/pdfs/hercules-graphics-card-review-byte-1983.pdf">review</a>.</p>
<p>I found the card below from a recycler on eBay, but several <a href="https://en.wikipedia.org/wiki/Hercules_Graphics_Card#Clone_boards">clone boards</a> which offer the same functionality are available too:</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/hercules.jpg" alt="Hercules Graphics Card" />
<figcaption>Hercules Graphics Card</figcaption>
</figure>
<p>You may notice the missing chip marked C17; this is where the font ROM should be! The seller was kind enough to refund me and allow me to keep the card, so I'm working on finding this chip. We can tell that the card is otherwise at least somewhat functional, because the screen is painted with green boxes <em>except</em> where the text would be highlighted. The ROM is the same type used in the MDA card, the Hercules Color Card, and the IBM PC XT motherboard, but with different data. In fact, the data on the Hercules Color Card is so similar that the HGC will use it, but this results in two identical copies of each character stacked on top of each other, because CGA text is lower resolution (8 pixels instead of 14).</p>
<p>The ROM is 24 pin, but is compatible with the AT28C64 28 pin EEPROM when adapted. This means we can use a reflashable, cheaply available ROM in place of the old font ROM. We can use any card with a writable ROM chip slot (such as the XT-IDE or Sergey's floppy controller) and <code>XTIDECFG</code> to flash the EEPROM from an image, such as <a href="http://martin.hinner.info/old/hgcteam/fonts/english.htm">this one</a>. I ordered a handful of pre-assembled <a href="https://store.go4retro.com/2364-adapter/">2364 adapters</a> along with <a href="https://www.ebay.com/itm/123369670569">AT28C64-12PC</a> EEPROMs; after flashing them using <code>XTIDECFG</code>, the PC still emitted one long beep followed by two short ones, and the display showed the top half of each character but a solid bottom half -- I also noticed some tick mark characters in random positions on screen.</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/hercules-schematic.jpg" alt="Hercules Graphics Card with ROM" />
<figcaption>Hercules Graphics Card with ROM</figcaption>
</figure>
<p>The <a href="https://www.seasip.info/VintagePC/hercplus.html"><em>Plus</em></a> is similar but uses a new chip and on-card RAM to provide <em>RAMFont</em> capabilities.</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/hercules-plus-ad.jpg" alt="Hercules Graphics Card Plus Ad" />
<figcaption>Hercules Graphics Card Plus Ad</figcaption>
</figure>
<h2 id="mfmrll-hard-drives">MFM/RLL Hard Drives</h2>
<p>The second XT also contained a 20MB hard drive and Western Digital Modified Frequency Modulation (MFM) controller card, alongside a full-sized floppy disk drive. As expected, the computer refused to boot from the hard drive, but once reformatted it worked perfectly. Similarly, my third XT came with a 31.5MB ST-238R drive and Seagate ST11R Run Length Limited (RLL) controller card alongside two half-height floppy drives. That hard drive booted once to the existing install before giving out and needing to be reformatted. <a href="https://www.redhill.net.au/d/10.php">MFM and RLL</a> are simply different encodings, where RLL packs more data onto the same area and requires a more accurate hard drive mechanism. When formatting, an RLL drive will use more sectors per track -- the ST-238R can be formatted with 26 sectors per track instead of MFM's 17.</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/seagate-ad.jpg" alt="Seagate ST-238R Ad" />
<figcaption>Seagate ST-238R Ad</figcaption>
</figure>
<p>These hard drives are &quot;dumb&quot; in that the controller performs a low-level format of the drive and stores information such as bad sectors, which the manufacturer supplies as a list on a sticker. All hard drives have some defects, but each drive is manufactured with enough spare capacity to handle them and still provide the advertised size. Controller cards contain a ROM with a copy of a formatting utility.</p>
<p>To low-level format a drive, we can use <a href="https://kb.iu.edu/d/aaoa"><code>DEBUG</code></a> (included with a DOS install) to branch into the ROM's formatting utility. The address depends on the manufacturer:</p>
<table>
<thead>
<tr>
<th>Manufacturer</th>
<th>Address</th>
</tr>
</thead>
<tbody>
<tr>
<td>Western Digital</td>
<td><code>G=C800:800</code></td>
</tr>
<tr>
<td>Adaptec</td>
<td><code>G=C800:CCC</code></td>
</tr>
<tr>
<td>Omti</td>
<td><code>G=C800:6</code></td>
</tr>
<tr>
<td>Seagate, DTC (Data Technology), etc.</td>
<td><code>G=C800:5</code></td>
</tr>
</tbody>
</table>
<pre><code>A:&gt; debug
- G=C800:800
</code></pre>
<p>See this <a href="/resources/txt/hints-mfm-scsi-drives.txt">hints</a> file for more information on individual drives, such as cylinder counts, etc.</p>
<p>For my Seagate ST-238R, this is how I configured it (this <a href="https://www.lo-tech.co.uk/wiki/Seagate_ST11M_Installation_Guide">guide</a> may help):</p>
<ol>
<li>
<p>From a bootable floppy disk with <code>DEBUG</code> (formatting fails with an XT-IDE installed), run:</p>
<pre><code>A:&gt; DEBUG
- G=C800:5
</code></pre>
<p>This launches the ST11 BIOS v2.0 Hard Disk Initialization Utility, assuming the card <a href="https://minuszerodegrees.net/manuals/Seagate/Seagate%20ST11M%20ST11R%20-%20Jumper%20settings.pdf">jumpers</a> are using the default setting (no connections).</p>
</li>
<li>
<p>Choose the drive to format; I chose 0.</p>
</li>
<li>
<p>Confirm the existing configuration for the drive or input a new one, mine is as follows:</p>
<table>
<thead>
<tr>
<th>Option</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Total cylinders on the drive</td>
<td>615</td>
</tr>
<tr>
<td>Total heads on the drive</td>
<td>4</td>
</tr>
<tr>
<td>Number of sectors per track</td>
<td>26</td>
</tr>
<tr>
<td>Starting write precomp cylinder</td>
<td>616</td>
</tr>
<tr>
<td>Drive model</td>
<td>ST-238R</td>
</tr>
<tr>
<td>Drive serial number</td>
<td>82119697</td>
</tr>
<tr>
<td>Interleave</td>
<td>4</td>
</tr>
</tbody>
</table>
<p>Note the sectors per track value is 26, not the standard 17 for MFM, which allows us to use 32MB instead of 20MB. The optimal interleave should be noted by the utility; SpinRite can also test interleave values to find the optimum.</p>
</li>
</ol>
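<p>As a sanity check, the geometry entered above multiplies out to the capacities quoted earlier. A quick shell calculation (assuming the usual 512-byte sectors):</p>
<pre><code class="language-sh"># capacity = cylinders * heads * sectors/track * bytes/sector
echo $((615 * 4 * 26 * 512))   # RLL, 26 sectors/track: 32747520 bytes (~31.2 MiB)
echo $((615 * 4 * 17 * 512))   # MFM, 17 sectors/track: 21411840 bytes (~20.4 MiB)
</code></pre>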
<h3 id="tools">Tools</h3>
<blockquote>
<p><a href="https://winworldpc.com/product/spinrite/ii">SpinRite</a> is a good alternative to ordinary low-level formatting, because it doesn't destroy the contents of a disk. However, it does not work with many kinds of drives, especially those that use sector translation.</p>
</blockquote>
<p>Controllers like the ST11R cannot be formatted with SpinRite because of sector translation (which SpinRite will detect), but once formatted and partitioned with <code>fdisk</code> SpinRite is able to test them.</p>
<p>SpinRite is an excellent tool for detecting bad sectors -- run the complete analysis after formatting your drive. The analysis tests every sector of the disk and takes several hours. SpinRite can also choose the best interleave setting for your drive. Below is a readout of a completed scan:</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/spinrite.jpg" alt="SpinRite scan summary" />
<figcaption>SpinRite scan summary</figcaption>
</figure>
<p>Another option is <a href="https://minuszerodegrees.net/software/Storage%20Dimensions/speedstor.htm">SpeedStor</a>, which can also be used to low-level format disks. It can create 32MB partitions if your DOS isn't new enough to support larger ones, but I prefer to use <code>fdisk</code> for that.</p>
<h2 id="xt-ide">XT-IDE</h2>
<p>My card came loaded with the &quot;tiny&quot; version of XUB, which doesn't allow selecting which drive to boot from (it always checks floppy drives first). To swap it, run <code>XTIDECFG</code>, choose the &quot;XT&quot; BIOS, ensure that &quot;SDP Command&quot; is set to &quot;None,&quot; and re-flash it. The &quot;auto detect&quot; functionality should detect your card revision information, I/O address, etc. Ensure the &quot;W&quot; dip switch is in the on position when re-flashing, and toggle it off otherwise to avoid accidental overwrites from e.g. a driver misconfiguration.</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/xtide.jpg" alt="XT-IDE card with CF card adapter and CF card" />
<figcaption>XT-IDE card with CF card adapter and CF card</figcaption>
</figure>
<p>Some network cards use the same I/O address (0x300) that the XT-IDE uses by default. The first six dip switches on the SW1 block, labeled A9-A4, are the binary encoding of the I/O address; by default they are (starting at A9) 110000, or 0x30. You can change the address to the standard hard drive controller location of 0x320 by switching A5 on, making 110010. Once this is done, the card <em>must</em> be reflashed, but the XUB won't be able to find the CF card, so you must boot from a prepared disk image with <code>XTIDECFG</code>. Once booted, auto-detect should update the address correctly; if not, just set it manually to e.g. 320 and 328.</p>
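<p>The switch arithmetic can be sketched in shell (a hypothetical helper, not part of any XT-IDE tooling): the A9-A4 positions are simply the I/O address divided by 16, written out as six bits.</p>
<pre><code class="language-sh"># Print the six A9-A4 switch positions for a 10-bit ISA I/O address
io_to_switches() {
  local bits=$(( $1 / 16 % 64 ))   # keep the top six of the ten address bits
  local out="" i
  for i in 5 4 3 2 1 0; do
    out="$out$(( bits / (2 ** i) % 2 ))"
  done
  echo "$out"
}

io_to_switches 0x300   # 110000, the XT-IDE default
io_to_switches 0x320   # 110010, A5 switched on
</code></pre>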
<p>As long as there is no I/O address overlap, XT-IDE cards can be used alongside a hard drive controller card. During boot, the IDE drive will be drive <code>D:</code> and the hard drive will be <code>C:</code>. By pressing <code>D</code>, you can switch the IDE drive to boot as the primary hard drive, and once booted the hard drive will show up as <code>D:</code>. This is especially handy when initializing a hard drive: after low-level formatting, you can use the formatting tools of your existing DOS installation via <code>fdisk</code> followed by <code>format d: /s</code>. A DOS booted from one disk cannot mark a second disk's primary partition as active; however, a PC-DOS 3.3 boot disk's <code>fdisk</code> was able to mark active a primary partition created with PC-DOS 2000 that was larger than the 32MB limit PC-DOS 3.3 could create itself. You must mark the partition active or the system won't boot from that disk (in my case it silently continues to BASIC).</p>
<h3 id="mounting">Mounting</h3>
<p>When mounting the CF card under macOS, it will sometimes show up as blank. This is because macOS is stricter about FAT16 filesystems than DOS, but it's easy to resolve with <code>repairVolume</code>:</p>
<pre><code class="language-sh"># Identify your CF card's device
; diskutil list external physical
# Replace /dev/disk4 with your disk
; diskutil unmountDisk /dev/disk4
; diskutil repairVolume /dev/disk4s1
; diskutil mountDisk /dev/disk4
</code></pre>
<p>Additionally, macOS will create hidden files such as a trash folder, spotlight index, fsevents folder, and attribute files for individual files. For a given volume name, you can disable some of these:</p>
<pre><code class="language-sh">; sudo mdutil -i off -d /Volumes/MY_DISK
</code></pre>
<p>If you are having difficulty copying a file onto a disk, these files may be the culprit. Use <code>ls -a</code> to view them, and then <code>rm</code> to delete. Under Settings, Privacy &amp; Security, Full Disk Access, you'll need to grant your terminal emulator access so it can delete spotlight indexes.</p>
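<p>A minimal cleanup sketch (the volume name is a placeholder; the file names are the usual macOS metadata: AppleDouble <code>._*</code> files, Spotlight index, fsevents log, and trash folder):</p>
<pre><code class="language-sh"># Remove macOS metadata from a mounted FAT16 volume before using it with DOS
# Usage: clean_mac_metadata /Volumes/MY_DISK
clean_mac_metadata() {
  find "$1" -name '._*' -delete                # AppleDouble attribute files
  rm -rf "$1/.Spotlight-V100" "$1/.fseventsd" "$1/.Trashes"
}
</code></pre>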
<h3 id="backups">Backups</h3>
<p>To back up your system, you can use <code>dd</code>:</p>
<pre><code class="language-sh"># Identify your CF card's device
; diskutil list external physical
# Replace /dev/disk4 with your disk
; dd if=/dev/disk4 of=&quot;$HOME/Desktop/xt-backup-20240101.img&quot;
</code></pre>
<p>This <code>.img</code> file can be opened and edited like any floppy disk image. Using an emulator like 86Box, you can even use it with a virtualized IBM PC XT -- unfortunately I've not been able to successfully boot from an image. Using this method, I was able to install <a href="https://winworldpc.com/product/pc-dos/2000">IBM PC-DOS 2000</a>, which is only distributed on 1.4MB 3.5&quot; disks. After installation and backup, I was able to repartition my CF card into a single 128MB partition, and use <code>rsync</code> to copy all the files from that backup image to my current disk image:</p>
<pre><code class="language-sh">; rsync -av /Volumes/NO\ NAME/ /Volumes/IBM\ PC\ XT --exclude=COMMAND.COM
</code></pre>
<p>Before flashing it back onto my CF card:</p>
<pre><code class="language-sh">; sudo dd if=$HOME/Desktop/xt-dos7.img of=/dev/disk4
</code></pre>
<h2 id="hardware">Hardware</h2>
<p>Although I'd never owned a PC with ISA slots, I had accumulated a couple of cards over the years:</p>
<ul>
<li>
<p>an <a href="https://hwmuseum.pp.ua/th99/i/A-B/54545.htm">Advanced Logic Research Floppy/Parallel/Serial card</a> which uses a 34-pin floppy connector, with parallel and serial ports;</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/alr-floppy-controller-card.jpg" alt="Advanced Logic Research Floppy/Parallel/Serial card" />
<figcaption>Advanced Logic Research Floppy/Parallel/Serial card</figcaption>
</figure>
</li>
<li>
<p>a <a href="https://hwmuseum.pp.ua/th99/t/A-B/52327.htm">Best Data Products Modem card</a>, which can be used for fax or data</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/bdp-modem-card.jpg" alt="Best Data Products Modem card" />
<figcaption>Best Data Products Modem card</figcaption>
</figure>
</li>
</ul>
<p>Omitting those discussed in detail above, I also acquired the following cards within the three XTs:</p>
<ul>
<li>
<p>an IBM <a href="https://en.wikipedia.org/wiki/Synchronous_Data_Link_Control">Synchronous Data Link Control (SDLC)</a> card likely for use with IBM's Systems Network Architecture (SNA);</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/sdlc-card.jpg" alt="IBM ASM-SDLC Card" />
<figcaption>IBM ASM-SDLC Card</figcaption>
</figure>
</li>
<li>
<p>an Analog Input Card, which provides a game port as well as a hole matrix for additional components;</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/analog-input-card.jpg" alt="Analog Input Card" />
<figcaption>Analog Input Card</figcaption>
</figure>
</li>
<li>
<p>a Parallel Card for printers;</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/parallel-card.jpg" alt="Parallel Card" />
<figcaption>Parallel Card</figcaption>
</figure>
</li>
<li>
<p>a Floppy Controller Card, each XT included one;</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/floppy-controller-card.jpg" alt="Floppy Controller Card" />
<figcaption>Floppy Controller Card</figcaption>
</figure>
</li>
</ul>
<h3 id="professional-debug-facility">Professional Debug Facility</h3>
<p>The Professional Debug Facility is a Terminate and Stay Resident (TSR) program for DOS which works alongside an ISA card providing a non-maskable (un-ignorable) interrupt, so that whatever the application or operating system is doing, the debugger can be invoked.</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/professional-debug-facility.jpg" alt="Professional Debug Facility" />
<figcaption>Professional Debug Facility</figcaption>
</figure>
<p>When invoked, either through a breakpoint in a program or via the card's button, the Resident Debug Tool presents the current execution environment:</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/debugger.jpg" alt="Resident Debug Tool" />
<figcaption>Resident Debug Tool</figcaption>
</figure>
<p>It seems to fill a similar niche to MacsBug on the Macintosh, which can also be installed as a resident program and be invoked with a key press.</p>
<h2 id="networking">Networking</h2>
<p>The IBM PC XT existed during the wild west of LAN networks: LocalTalk, Ethernet, Token Ring, StarLAN, Novell NetWare, etc. We'll focus on LocalTalk (AppleTalk) and Ethernet (IP).</p>
<h3 id="apple-localtalk-pc-card">Apple LocalTalk PC Card</h3>
<p>The Apple LocalTalk PC Card, described in this <a href="https://oldvcr.blogspot.com/2020/07/appleshare-pc-on-ms-dos-and-apple.html">OVCR post</a>, enables printing to AppleTalk printers and mounting of AppleShare drives, but won't work with Netatalk 2.x printers or shares. Another write-up exists in this <a href="https://www.reddit.com/r/VintageApple/comments/l8p184/testing_a_localtalk_pc_isa_card/">Reddit post</a>, and another in this <a href="https://blog.macip.net/a-localtalk-pc-card-a-macintosh-plus-and-a-linux-box/">blog post</a>. There is also information on Corey Anderson's <a href="http://www.the4cs.com/~corin/localtalk/">blog</a>, the apparent genius behind a <a href="http://www.the4cs.com/~corin/cse477/toaster/">talking toaster</a>.</p>
<p>Software:</p>
<ul>
<li>
<p>A <a href="https://archive.org/details/localtalk-pc-rom-appleshare2.0">1.4MB disk image</a> is available, made from 360K disks which I'm still searching for an image of.</p>
<ul>
<li>Since publication, Steve of Mac84 has archived <a href="https://archive.org/details/appleshare-pc-2.0.1-for-ms-dos-apple-localtalk-pc-card-1989/">AppleShare PC 2.0.1 c. 1989</a>, both the two 360k 5.25&quot; disks, and the 720k 3.5&quot; disk. He also archived the <a href="https://archive.org/details/apple-localtalk-pc-floppies-scan-5.25-1987">LocalTalk PC Card Installer and PC LaserWriter Program Disk</a> which came with the card, and <a href="https://archive.org/details/appleshare-pc-msdos-apple-localtalk-pc-card">AppleShare PC 1.0</a>, at the time an optional paid software package -- both c. 1987 and prior to AppleTalk Phase 2.</li>
</ul>
</li>
<li>
<p>Farallon's version of the software is available at the end of <a href="https://blog.macip.net/a-localtalk-pc-card-a-macintosh-plus-and-a-linux-box/">this write-up</a>. This is the only version I've found on 360k floppies as required for the IBM PC XT; by running this installer you can generate a config which can be used with another version.</p>
</li>
<li>
<p>Cameron at OVCR shared his <a href="/resources/bin/dos/ASPC.ZIP">files</a> with me. The zip can be unpacked to <code>C:\ASPC</code>, and a <code>NET.CFG</code> file must be added to match your configuration. Appendix C of the <a href="/resources/pdfs/appleshare-pc-user-guide.pdf">User's Guide</a> has details.</p>
</li>
<li>
<p>The manuals are available at <a href="https://bitsavers.org/pdf/apple/mac/developer/AppleShare/">BitSavers</a> -- use <code>rsync</code>, it's quicker:</p>
<pre><code class="language-sh">; rsync -av rsync://bitsavers.org:/bitsavers/pdf/apple/mac/developer/AppleShare/ .
</code></pre>
</li>
</ul>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/apple-localtalk-pc-card.jpg" alt="Apple LocalTalk PC card" />
<figcaption>Apple LocalTalk PC card</figcaption>
</figure>
<p>Farallon's fork of this software places several batch files into <code>C:\PHONENET</code> alongside the programs. <code>ABOTH.BAT</code> loads facilities for both printing and files, but doesn't load <code>DA</code> as a TSR by default (so the Alt+A hotkey doesn't work); to do so, simply add the <code>/r</code> switch. The <code>CONFIG.SYS</code> file should contain <code>FILES=20</code> or higher. In the Apple version, the loading script is placed directly into <code>AUTOEXEC.BAT</code>; below is an example:</p>
<pre><code>lh C:\aspc\LSL
   if errorlevel 1 goto aspc_err
lh C:\aspc\LTALKP /NAME=LTALK$
   if errorlevel 1 goto aspc_err
lh C:\aspc\ATALK
   if errorlevel 1 goto aspc_err
lh C:\aspc\ASP_WS
   if errorlevel 1 goto aspc_err
lh C:\aspc\ASHARE
   if errorlevel 1 goto aspc_err
lh C:\aspc\MINSES
   if errorlevel 1 goto aspc_err
lh C:\aspc\REDIR
   if errorlevel 1 goto aspc_err
lh C:\aspc\PAP_WS
   if errorlevel 1 goto aspc_err
lh C:\aspc\APRINT
   if errorlevel 1 goto aspc_err
lh C:\aspc\DA /r
   if errorlevel 1 goto aspc_err
lh C:\aspc\ANET AUTO
   if errorlevel 1 goto aspc_err
REM ***  Memory usage for the above programs is approximately 200 K bytes.
goto skip_aspc
:aspc_err
echo *** A fatal error has occurred while loading AppleShare PC. ***
pause *ASPC*
:skip_aspc
</code></pre>
<p>You can place <code>goto skip_aspc</code> above this block to only load AppleShare PC under certain circumstances, since it uses around 200KB out of 640KB total. Once loaded, using either Farallon's or Apple's version, the <code>DA</code> command or hotkey will load you into the &quot;desktop accessory.&quot; Using tab and the F keys, you can navigate through adding a printer and mounting it as a parallel port, or mounting an AppleShare volume as a drive letter.</p>
<p>NCSA Telnet reportedly works with AppleTalk, but I have not had success with my Netatalk-based IP gateway. I found this copy of <a href="/resources/bin/dos/tel23.zip">version 2.3</a>; here's the relevant excerpt from Telnet's <a href="/resources/txt/ncsa-telnet-faq.txt">FAQ</a>:</p>
<pre><code>Can I use Telnet with AppleTalk?


Using an Appletalk network involves some special considerations. First,
you must load the Appletalk driver into memory. Version 1.0 of the
&quot;ATALK.EXE&quot; driver was used in the development of NCSA Telnet.

The second consideration involves the &quot;interrupt=&quot; line. The &quot;interrupt=&quot;
line in your CONFIG.TEL file refers to the software interrupt the
Appletalk driver is using, not the hardware interrupt the card is set to.
For example, if your Appletalk card is set to IRQ2, you should NOT set
the &quot;interrupt=&quot; line to &quot;2&quot;. Instead, the value should be set to the
software interrupt, usually &quot;interrupt=60&quot; or &quot;interrupt=5C&quot;.

Static addressing does not work at the current time in NCSA Telnet 2.3
using the AppleTalk driver. Therefore, NCSA Telnet ignores any IP address
you set in your CONFIG.TEL file, and assigns an IP address to your PC by
the Appletalk gateway.

Some AppleTalk users have been more successful with v2.3.03 of Telnet.
If you would like to try v2.3.03, it's available on our anonymous ftp
server in the /Telnet/DOS/contributions directory.

One of our users wrote:

To load telnet from the dosprompt [nothing telnet-specific in
config.sys or autoexec.bat we use the following sequence:

lsl.com
ltalk.com
atalk.com
ashare.com
compat.com
d:\network\telnet\telbin -n -h d:\network\telnet\config.tel

where all of the atalk stuff would be in the current directory and all
of the telnet stuff is in d:\network\telnet.

broadcast=255.255.255.255
netmask=255.255.255.0
hardware=atalk          # network adapter board (Appletalk)
interrupt=60            # I have an Apple or Farralon card and PhoneNET Talk
			#remember to run COMPAT.COM for NCSA to run on
			# LocalTalk
#interrupt=5C           # I have a TOPS Flashcard

mtu=512                 # maximum transmit unit in bytes
maxseg=512              # largest segment we can receive
rwin=512                # most bytes we can receive without ACK
</code></pre>
<p>The <code>COMPAT.COM</code> program is essential but not loaded by default. Under Farallon's fork, the line loading it is commented out in the batch script, whereas with earlier versions it needs to be added manually.</p>
<p>I've also found copies of <code>PCROUTE</code> 2.24, which I am hosting: <a href="/resources/bin/dos/pcroute2.24.src.tar.Z">source</a>, <a href="/resources/bin/dos/pcroute2.24.tar.Z">binaries</a>. <code>PCROUTE</code> allows an IBM PC to act as a router between LocalTalk, Ethernet, SLIP, and StarLAN networks.</p>
<h3 id="ip">IP</h3>
<p>IP Networking is possible on an XT given a card with a <a href="http://packetdriversdos.net/">packet driver</a> and <a href="https://www.brutman.com/mTCP/">mTCP</a>. I was able to find a couple of Intel 8/16 LAN Adapter cards, which are compatible with both 8- and 16-bit ISA slots and configurable on both with <code>SOFTSET.EXE</code>. A copy is available at <code>ftp://ftp.oldskool.org</code>; navigate to <code>pub/misc/Hardware/Intel/8_16 LAN ADAPTER/</code>.</p>
<p>The files <code>e16disk.exe</code> and <code>softset2.exe</code> are self-expanding archives which should be run in a new folder on your XT. In the resulting files from <code>e16disk.exe</code>, the <code>softset.exe</code> program is a stand-alone configuration utility for your card, which allows setting a new IRQ and I/O address based on your machine's available options. Also among the files is <code>packet/eth16.com</code>, the packet driver. You may need <code>exp8.com</code> within the FTP folder on an 8088. The <code>softset2.exe</code> archive contains a <code>softset2.exe</code> which is nearly identical to <code>softset.exe</code>. Via <code>softset</code>, you can also configure the address of the boot ROM (or disable it) -- the socket supports AT28C64 chips which can be programmed with an XT-IDE board, so could support additional ROM software.</p>
<p>Once the card is configured, you'll need to load the packet driver by executing <code>exp8.com</code> (or <code>exp16.com</code>; add it to <code>AUTOEXEC.BAT</code>), which gives the mTCP programs a standard interface to the card. Programs like <code>PCROUTE</code> can also support cards via a packet driver. With mTCP working, you can finally access BBSs, IRC, and even <a href="https://yeokhengmeng.com/2023/03/building-a-dos-chatgpt-client-in-2023/">ChatGPT</a>.</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/intel-etherexpress.jpg" alt="Intel 8/16 LAN Adapter" />
<figcaption>Intel 8/16 LAN Adapter</figcaption>
</figure>
<p>Unfortunately, my cards were defective and failed <code>softset</code>'s diagnostics under several configurations. I reached out to the eBay seller, who is sending another two cards after running diagnostics themselves, and I'll fill this in as I get TCP/IP working. A third card, which also failed <code>SOFTSET</code>'s diagnostics for the &quot;82586 chip,&quot; did work:</p>
<ol>
<li>
<p>Run <code>SOFTSET.EXE</code> and automatically configure the card. Take note of the ROM address, as this may conflict with other cards you have. I use <code>0xCC00</code> for my XT-IDE ROM, <code>0xCE00</code> for my high density floppy controller, <code>0xC800</code> for the MFM hard drive controller, and <code>0xD000</code> for the Intel 8/16 LAN Adapter (empty). Also note the I/O address; I used <code>0x360-0x36F</code> as it is <a href="https://stanislavs.org/helppc/ports.html">listed</a> for use with a PC Network -- the XTIDE by default uses <code>0x300</code>, and the hard drive controller card uses <code>0x320</code>. The software chose the available IRQ 2.</p>
</li>
<li>
<p>Confirm the packet driver's software interrupt, usually <code>0x60</code>, by running</p>
<pre><code>EXP8.COM 0x60
</code></pre>
</li>
<li>
<p>Assuming mTCP is installed at <code>C:\MTCP</code>, add a <code>TCP.CFG</code> <a href="http://wiki.freedos.org/wiki/index.php/Networking_FreeDOS_-_mTCP">configuration file</a> with the following contents:</p>
<pre><code>PACKETINT 0x60
HOSTNAME xt
MTU 1500
</code></pre>
<p>The <code>MTU</code> line configures mTCP to use the maximum packet size according to <a href="https://datatracker.ietf.org/doc/html/rfc894">RFC 894</a>. Other fields are unnecessary if using <code>DHCP</code> (most networks do). mTCP's <code>DHCP</code> updates the configuration with IPs and other lease metadata each time it's run.</p>
</li>
<li>
<p>Assuming <code>EXP8.COM</code> is installed under <code>C:\ETHEREXP</code>, add the following to your <code>AUTOEXEC.BAT</code>:</p>
<pre><code>SET MTCPCFG=C:\MTCP\TCP.CFG
C:\ETHEREXP\EXP8.COM 0x60
C:\MTCP\DHCP.EXE
</code></pre>
<p>This will load the packet driver and run <code>DHCP</code> on each boot. Alternatively, place the last two lines in a batch file and run it only when you wish to connect.</p>
<p>Adding a <code>SLEEP</code> between the packet driver and <code>DHCP</code> may be necessary to resolve an issue where the DHCP client fails to fetch a lease on the first two attempts because the packet driver has not fully initialized.</p>
</li>
</ol>
<p>Once the packet driver is loaded and <code>DHCP</code> is run to establish an IP address, the other mTCP programs will read that info from the updated config file pointed to by <code>MTCPCFG</code>. For instance, we can use <a href="https://www.brutman.com/mTCP/mTCP_Telnet.html"><code>TELNET</code></a> to access other computers across the Internet:</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/telnet-starwars.jpg" alt="Star Wars playing via Telnet" />
<figcaption>Star Wars playing via Telnet</figcaption>
</figure>
<p>Or simply across our local network, such as this Linux VM:</p>
<figure>
<img src="/resources/images/2024-01-01-ibm-pc-xt/telnet-linux.jpg" alt="Linux shell via Telnet" />
<figcaption>Linux shell via Telnet</figcaption>
</figure>
<p>The <a href="https://www.pcorner.com/list/NETWORK/PKTD11.ZIP/SOFTWARE.DOC/"><code>SOFTWARE.DOC</code></a> file from a <a href="http://crynwr.com/">Crynwr</a> distribution lists software that works with packet drivers, and where you could get it in 1993.</p>
<h4 id="telnet">Telnet</h4>
<p>If using a monochrome monitor via either MDA or Hercules, set <code>TERM=pcansi-mono</code> so that programs emit monochrome text (by default <code>TERM=ansi</code>). Otherwise, <code>telnet</code> will map colors to black or white, which can result in invisible black-on-black text (for example, the <code>tmux</code> status bar). You may need to install the <code>pcansi-mono</code> terminfo entry on your Linux machine; on Fedora it is provided by <code>ncurses-term</code>:</p>
<pre><code>sudo dnf install -y ncurses-term
</code></pre>
<p>You may also need to set <code>LANG=en_US</code> to disable UTF-8. You can have telnet set <code>TERM</code> automatically by adding:</p>
<pre><code>TELNET_TERMTYPE pcansi-mono
</code></pre>
<p>to <code>TCP.CFG</code>, see the <a href="https://github.com/retrohun/mTCP/blob/master/USERDOCS/telnet.txt">mTCP Telnet documentation</a>.</p>
<h5 id="a-bug">A Bug</h5>
<p>In testing <code>telnet</code> with <code>tmux</code>, I uncovered a bug in its auto-margin implementation. Once I tracked the issue down, <code>mbbrutman</code> was able to patch telnet and make a <a href="http://www.brutman.com/mTCP/download/telnet_cptaffe.exe">test version available</a> with the fix. Previously, <code>tmux</code> would trigger the cursor to move down to a new row by drawing the status line across the bottom of the screen. On each update of the status bar (which contains the time), the problem would compound and the text of the screen would move a line further up from the cursor. The following chronicles my experience isolating the issue:</p>
<p>First, I was able to get <code>tmux</code> working over <code>telnet</code> by using a termcap with automatic margins disabled. That way, <code>tmux</code> won't write the last character of the last line until it expects a line wrap, which side-steps our issue. This works either using an existing termcap like <code>vt100-nam</code>, or by crafting a new one:</p>
<pre><code class="language-sh">; infocmp -C -T pcansi-mono &gt; pcansi-mono-nam
; sed -i 's/pcansi-m|pcansi-mono|ibm-pc terminal programs claiming to be ANSI (mono mode):/pcansi-mono-no-am:/' pcansi-mono-nam
; sed -i 's/:am:/:/' pcansi-mono-nam
; tic pcansi-mono-nam
</code></pre>
<p>The <code>tic</code> command will compile the new <code>pcansi-mono-nam</code> terminal definition and place it under a personal terminfo database at: <code>~/.terminfo/p/pcansi-mono-nam</code>. To place it under <code>/usr/share/terminfo</code>, run <code>tic</code> as root. Although <code>tmux</code> will find the terminal definitions under <code>~/.terminfo</code>, <code>screen</code> will not.</p>
<p>Now, if we set <code>TERM=pcansi-mono-nam</code>, <code>tmux</code> works correctly. We can set <code>TELNET_TERMTYPE</code> to our custom <code>pcansi-mono-nam</code> in <code>TCP.CFG</code> to set it automatically.</p>
<p>The <a href="https://www.gnu.org/software/termutils/manual/termcap-1.3/html_mono/termcap.html#SEC27"><code>termcap</code> documentation on wrapping</a> notes:</p>
<blockquote>
<p>Wrapping means moving the cursor from the right margin to the left margin of the following line. Some terminals wrap automatically when a graphic character is output in the last column, while others do not.</p>
</blockquote>
<p>See also the <a href="https://pubs.opengroup.org/onlinepubs/7908799/xcurses/terminfo.html#tag_002_001_003">OpenGroup <code>terminfo</code> documentation</a>.</p>
<p>The documentation notes that the <code>am</code> capability indicates the terminal <em>will</em> scroll if a character is placed in the last column of the last line. Using <code>infocmp</code>, we can show that our <code>pcansi-mono</code> terminal does have <code>am</code>, indicating a character <em>should not</em> be placed in the last column of the last line.</p>
<pre><code class="language-sh">; infocmp -C -T pcansi-mono
#	Reconstructed via infocmp from file: /usr/share/terminfo/p/pcansi-mono
# (rmacs/smacs removed for consistency)
pcansi-m|pcansi-mono|ibm-pc terminal programs claiming to be ANSI (mono mode):\
	:am:bs:mi:ms:\
	:co#80:it#8:li#24:\
	:al=\E[L:bl=^G:bt=\E[Z:cd=\E[J:ce=\E[K:cl=\E[H\E[J:\
	:cm=\E[%i%d;%dH:cr=\r:ct=\E[3g:dc=\E[P:dl=\E[M:do=\E[B:\
	:ho=\E[H:kb=^H:kd=\E[B:kh=\E[H:kl=\E[D:kr=\E[C:ku=\E[A:\
	:le=\E[D:mb=\E[5m:md=\E[1m:me=\E[0m:mr=\E[7m:nd=\E[C:\
	:..sa=\E[0;10%?%p1%t;7%;%?%p2%t;4%;%?%p3%t;7%;%?%p4%t;5%;%?%p6%t;1%;%?%p7%t;8%;%?%p9%t;12%;m:\
	:se=\E[m:sf=\n:so=\E[7m:st=\EH:ta=^I:ue=\E[m:up=\E[A:\
	:us=\E[4m:
</code></pre>
<p>The <a href="https://www.gnu.org/software/screen/manual/html_node/Getting-Started.html"><code>screen</code> manual</a> has this to say:</p>
<blockquote>
<p>If your terminal is a “true” auto-margin terminal (it doesn’t allow the last position on the screen to be updated without scrolling the screen) consider using a version of your terminal’s termcap that has automatic margins turned off. This will ensure an accurate and optimal update of the screen in all circumstances. Most terminals nowadays have “magic” margins (automatic margins plus usable last column). This is the VT100 style type and perfectly suited for screen. If all you’ve got is a “true” auto-margin terminal screen will be content to use it, but updating a character put into the last position on the screen may not be possible until the screen scrolls or the character is moved into a safe position in some other way. This delay can be shortened by using a terminal with insert-character capability.</p>
</blockquote>
<p>Confusingly, they suggest <em>removing</em> <code>am</code> if the terminal <em>supports</em> &quot;true&quot; auto-margin, and this is what side-stepped the behavior with <code>tmux</code>.</p>
<p>However, the real issue lay in mTCP's <code>telnet</code> implementation, which should implement &quot;magic margins.&quot; It correctly does not automatically wrap when the final character is written, but it does when certain additional control characters are sent. In the source code, in <code>TELNETSC.CPP</code>:</p>
<pre><code class="language-cpp">// Overhang mode is kind of goofy and I created it based on experimentation I
// did with putty.  Basically, if the cursor is in the last column and you
// print a character there you do not automatically wrap.  You only wrap to
// the first column on the next line if another character gets printed.
// This allows you to put a character in the last column, and then interpret
// a control code such as Backspace, LF or CR while still on that same line.
</code></pre>
<p>From the telnet server host, we can run the following to get a hex dump of the packet contents, including the status line that caused the issue:</p>
<pre><code class="language-sh">sudo tcpdump -i ens160 dst misc.home.arpa and src xt.home.arpa and src port telnet -X
02:11:04.009483 IP misc.home.arpa.telnet &gt; xt.home.arpa.pluribus: Flags [P.], seq 247231475:247231587, ack 1849633838, win 64134, length 112
	0x0000:  4510 0098 c703 4000 4006 5981 0a00 0303  E.....@.@.Y.....
	0x0010:  0a00 02c9 0017 0d8d 0ebc 73f3 6e3f 2c2e  ..........s.n?,.
	0x0020:  5018 fa86 1a56 0000 1b5b 3330 6d1b 5b34  P....V...[30m.[4
	0x0030:  326d 1b5b 3235 3b31 485b 305d 2030 3a69  2m.[25;1H[0].0:i
	0x0040:  7273 7369 2d20 313a 6261 7368 2a20 2020  rssi-.1:bash*...
	0x0050:  2020 2020 2020 2020 2020 2020 2020 2020  ................
	0x0060:  2020 205b 302c 305d 2022 6d69 7363 2e68  ...[0,0].&quot;misc.h
	0x0070:  6f6d 652e 6172 7061 2220 3032 3a31 3120  ome.arpa&quot;.02:11.
	0x0080:  3235 2d4e 6f76 2d32 341b 5b30 3b31 306d  25-Nov-24.[0;10m
	0x0090:  1b5b 3133 3b31 3948                      .[13;19H
</code></pre>
<p>This hex dump is equivalent to:</p>
<pre><code class="language-sh">printf &quot;\e[30m\e[42m\e[25;1H[0] 0:irssi- 1:bash*                      [0,0] \&quot;misc.home.arpa\&quot; 02:13 25-Nov-24\e[0;10m\e[13;19H&quot;
</code></pre>
<p>By testing this captured line with the <code>printf</code> command, I was able to show that the line without the trailing escape sequences did not wrap, but with them it did.</p>
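<p>To make this concrete, we can strip the escape sequences from the captured line and count what's left (assuming GNU <code>sed</code>, which understands the <code>\x1b</code> escape):</p>
<pre><code class="language-sh"># Remove every CSI sequence, then count the remaining printable characters:
# the status text is exactly 80 columns wide, so its final character lands in
# the last column of the last row, which is what trips the wrap.
line=$(printf '\033[30m\033[42m\033[25;1H[0] 0:irssi- 1:bash*                      [0,0] &quot;misc.home.arpa&quot; 02:11 25-Nov-24\033[0;10m\033[13;19H')
printf '%s' &quot;$line&quot; | sed 's/\x1b\[[0-9;]*[A-Za-z]//g' | wc -c   # prints 80
</code></pre>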
<p>The <a href="https://www.gnu.org/software/screen/manual/html_node/Control-Sequences.html">control sequences</a> are:</p>
<table>
<thead>
<tr>
<th>Sequence</th>
<th>Category</th>
<th>Description</th>
<th><code>tput</code></th>
</tr>
</thead>
<tbody>
<tr>
<td><code>\e[30m</code></td>
<td>Select Graphic Rendition</td>
<td>Set the foreground to black</td>
<td><code>tput setaf 0</code></td>
</tr>
<tr>
<td><code>\e[42m</code></td>
<td>Select Graphic Rendition</td>
<td>Set the background color to green</td>
<td><code>tput setab 2</code></td>
</tr>
<tr>
<td><code>\e[25;1H</code></td>
<td>Direct Cursor Addressing</td>
<td>Set the cursor position to (25, 1), the first character of the last row of the screen</td>
<td><code>tput cup 24 0</code></td>
</tr>
<tr>
<td>...</td>
<td></td>
<td>80 characters of text, placing us at the last character, (25, 80)</td>
<td></td>
</tr>
<tr>
<td><code>\e[0;10m</code></td>
<td>Select Graphic Rendition</td>
<td>Resets all colors using the <code>sgr0</code> value for our terminal</td>
<td><code>tput sgr0</code></td>
</tr>
</tbody>
</table>
<p>Note <code>\e[</code> aka <code>\x1b[</code> aka <code>\033[</code> is known as a Control Sequence Introducer (CSI), so these are sometimes redundantly referred to as CSI sequences. See <a href="https://tldp.org/HOWTO/Bash-Prompt-HOWTO/x405.html">Colours and Cursor Movement with <code>tput</code></a> for more information on producing sequences.</p>
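<p>As a quick sanity check that these spellings are equivalent, <code>od</code> shows identical bytes for each (using bash's <code>printf</code>):</p>
<pre><code class="language-sh">printf '\033[' | od -An -c   # the escape byte prints as octal 033
printf '\x1b[' | od -An -c   # identical output
</code></pre>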
<h2 id="resources">Resources</h2>
<p>Here are a few helpful resources I found during this project:</p>
<ul>
<li><a href="https://forum.vcfed.org/index.php?forums/pcs-and-clones-xt-and-early-at-class-machines.34/">The Vintage Computer Federation Forums</a></li>
<li><a href="https://www.minuszerodegrees.net/">Minus Zero Degrees</a>, a reference for the IBM 51xx PC family</li>
<li>The <code>#vc</code> (Vintage Computing) IRC channel on <a href="https://www.slashnet.org/">SlashNET</a>, where many knowledgeable folks are often online.</li>
</ul>
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>I snapped a photo of the card, for those curious:</p>
<p><img src="/resources/images/2024-01-01-ibm-pc-xt/system36-driver-card.jpg" alt="System/36 IBM PC XT Driver Card" />&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2023-11-04-mswfw</id>
    <title>Microsoft Windows for Workgroups</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2023-11-04-mswfw" />
    <published>2023-11-06T23:35:00-05:00</published>
    <summary>Installing MS-DOS and Windows 3.1 from old floppies</summary>
    
    <media:content url="https://connor.zip/resources/images/2023-11-04-mswfw/mswfw.png" medium="image" width="800" height="651"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
<p>Last Sunday I was able to partake in Little Rock's annual <a href="https://www.arkansascornbreadfestival.com/">Cornbread Festival</a>, which takes place in my neighborhood. We sampled blueberry cornbread, Mexican cornbread, cornbread and chili, cornbread and barbecue, corndogs, and mealie bread, among others. On the way back, we stopped by a local thrift store operating out of the bottom of a Methodist church and stumbled upon a collection of floppy disks.</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/disks.jpg" alt="Microsoft Windows for Workgroups on eight disks" />
<figcaption>Microsoft Windows for Workgroups on eight disks</figcaption>
</figure>
<p>Among them were these:</p>
<ul>
<li>Microsoft Windows for Workgroups on eight disks</li>
<li>MS-DOS 6.22 on three disks</li>
<li>Sound Blaster 16 on four disks, including one for a text-to-speech program</li>
<li>ATi Mach 64 drivers on three disks</li>
</ul>
<p>alongside seventeen other miscellaneous disks.</p>
<h2 id="imaging">Imaging</h2>
<p>I had obtained a Dell Floppy Drive Module, a USB 3.5&quot; floppy disk drive, from Goodwill some time ago but hadn't any high-density disks to test it with. Although rebadged by Dell, it is a TEAC FD-05PUB:</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/drive.jpg" alt="TEAC FD-05PUB one-pager" />
<figcaption>TEAC FD-05PUB one-pager</figcaption>
</figure>
<p>It didn't work initially with an Apple USB-C to USB adapter on my MacBook Pro, but plugging it into my Dell monitor, which was itself attached to the MacBook over USB-C, worked. The floppy disks showed up as any other external drive would, and using <code>dd</code> I was able to make some images.</p>
<pre><code class="language-sh">; diskutil list external physical
/dev/disk6 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                            MSWFW1                 *1.5 MB     disk6
</code></pre>
<p>I wrote a little script at <code>~/bin/fdimage</code>:</p>
<pre><code class="language-sh">#!/usr/bin/env bash
set -euo pipefail

name=&quot;&quot;
while getopts &quot;o:&quot; opt; do
  case $opt in
    o)
      name=&quot;$OPTARG&quot;
      ;;
    *)
      echo &quot;Usage fdimage [-o out.dmg]&quot; &gt; /dev/stderr
      exit 1
      ;;
  esac
done

disks=$(diskutil list -plist external physical | plutil -extract 'AllDisksAndPartitions' json - -o -)
ndisks=$(jq -r length &lt;&lt;&lt; &quot;$disks&quot;)

if [[ &quot;$ndisks&quot; -ne 1 ]]
then
  echo &quot;Expected one external disk, but found $ndisks&quot; &gt; /dev/stderr
  diskutil list external physical &gt; /dev/stderr
  exit 1
fi

disk=$(jq -r first &lt;&lt;&lt; &quot;$disks&quot;)

volume=$(jq -r '.VolumeName // empty' &lt;&lt;&lt; &quot;$disk&quot;)
if [[ -z &quot;$name&quot; &amp;&amp; ! -z &quot;$volume&quot; ]]
then
  name=&quot;${volume}.dmg&quot;
fi
if [[ -z &quot;$name&quot; ]]
then
  echo &quot;Disk has no name, provide one with -o&quot; &gt; /dev/stderr
  exit 1
fi

# Unmount disk
dev=$(jq -r .DeviceIdentifier &lt;&lt;&lt; &quot;$disk&quot;)
dev=&quot;/dev/$dev&quot;
mount=$(jq -r '.MountPoint // empty' &lt;&lt;&lt; &quot;$disk&quot;)
if [[ ! -z &quot;$mount&quot; ]]
then
  diskutil umount &quot;$dev&quot;
fi

sudo dd if=&quot;$dev&quot; of=&quot;$name&quot; bs=512 conv=noerror,sync status=progress
</code></pre>
<p>It assumes you'll only have one external disk drive plugged in, and uses <code>diskutil</code> to find and unmount the disk, then <code>dd</code> to image it into a <code>.dmg</code>. The <code>.dmg</code> can be mounted just like any other <code>.dmg</code>, by double clicking it. Usage looks like this, using one of the <code>mach64</code> disks as an example since they don't have a unique name, which forces the use of <code>-o</code>.</p>
<pre><code class="language-sh">; fdimage -o &quot;MACH64 3.dmg&quot;
Volume NO NAME on disk6 unmounted
Password:
  1442304 bytes (1442 kB, 1409 KiB) transferred 64.089s, 23 kB/s
2880+0 records in
2880+0 records out
1474560 bytes transferred in 64.642070 secs (22811 bytes/sec)
</code></pre>
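<p>The numbers <code>dd</code> reports match the standard 1.44MB floppy geometry, which we can double-check with shell arithmetic:</p>
<pre><code class="language-sh"># 80 tracks x 2 sides x 18 sectors per track, at 512 bytes per sector:
echo $((80 * 2 * 18))         # 2880 sectors, dd's &quot;2880+0 records&quot;
echo $((80 * 2 * 18 * 512))   # 1474560 bytes, dd's reported total
</code></pre>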
<p>To simplify the imaging of multiple disks in a series which <em>do</em> have a unique name, I wrote another script <code>~/bin/fdimages</code>:</p>
<pre><code class="language-sh">#!/usr/bin/env bash
set -euo pipefail

# TODO: Disks with no name have no UUID
check_disks() {
  disks=$(diskutil list -plist external physical | plutil -extract 'AllDisksAndPartitions' json - -o -)
  ndisks=$(jq -r length &lt;&lt;&lt; &quot;$disks&quot;)

  if [[ &quot;$ndisks&quot; -eq 1 ]]
  then
    disk=$(jq -r first &lt;&lt;&lt; &quot;$disks&quot;)
    dev=$(jq -r .DeviceIdentifier &lt;&lt;&lt; &quot;$disk&quot;)
    dev=&quot;/dev/$dev&quot;
    name=$(jq -r '.VolumeName // empty' &lt;&lt;&lt; &quot;$disk&quot;)
    uuid=$(diskutil info -plist &quot;$dev&quot; | plutil -extract VolumeUUID raw - || {
      echo &quot;Disk has no UUID, fdimages can't detect disk removal. Use fdimage instead.&quot; &gt; /dev/stderr
      exit 1
    })
  fi
}

prevuuid=&quot;&quot;
while true
do
  check_disks
  if [[ &quot;$ndisks&quot; -gt 1 ]]
  then
    echo &quot;Expected one external disk, but found $ndisks&quot; &gt; /dev/stderr
    diskutil list external physical &gt; /dev/stderr
    exit 1
  fi

  # Wait for a different disk to be inserted
  if [[ &quot;$ndisks&quot; -lt 1 || &quot;$uuid&quot; = &quot;$prevuuid&quot; ]]
  then
    echo &quot;Waiting for new disk to be inserted...&quot; &gt; /dev/stderr
  fi
  while [[ &quot;$ndisks&quot; -lt 1 || &quot;$uuid&quot; = &quot;$prevuuid&quot; ]]
  do
    sleep 1
    check_disks
  done

  echo &quot;Imaging disk $name&quot; &gt; /dev/stderr

  # Unmount disk
  mount=$(jq -r '.MountPoint // empty' &lt;&lt;&lt; &quot;$disk&quot;)
  if [[ ! -z &quot;$mount&quot; ]]
  then
    diskutil umount &quot;$dev&quot;
  fi

  sudo dd if=&quot;$dev&quot; of=&quot;${name}.dmg&quot; bs=512 conv=noerror,sync status=progress
  prevuuid=&quot;$uuid&quot;
done
</code></pre>
<p>The Windows for Workgroups and MS-DOS disks have names and unique UUIDs in <code>diskutil</code>, which makes it simple to detect if a new disk has been inserted and to create unique files for each disk. Here's an example from the Sound Blaster 16 disks:</p>
<pre><code class="language-sh">; mkdir &quot;Sound Blaster 16&quot;
; cd &quot;Sound Blaster 16&quot;
; fdimages
Waiting for new disk to be inserted...
Imaging disk INSTALL
Volume INSTALL on disk6 unmounted
  1442304 bytes (1442 kB, 1409 KiB) transferred 63.385s, 23 kB/s
2880+0 records in
2880+0 records out
1474560 bytes transferred in 63.938421 secs (23062 bytes/sec)
Waiting for new disk to be inserted...
Imaging disk APPLICATION
Volume APPLICATION on disk6 unmounted
  1442304 bytes (1442 kB, 1409 KiB) transferred 64.143s, 22 kB/s
2880+0 records in
2880+0 records out
1474560 bytes transferred in 64.694649 secs (22793 bytes/sec)
Waiting for new disk to be inserted...
Imaging disk ACCESSORIES
Volume ACCESSORIES on disk6 unmounted
  1442304 bytes (1442 kB, 1409 KiB) transferred 63.345s, 23 kB/s
2880+0 records in
2880+0 records out
1474560 bytes transferred in 63.899592 secs (23076 bytes/sec)
Waiting for new disk to be inserted...
Imaging disk T2S DISK
Volume T2S DISK on disk6 unmounted
  1442304 bytes (1442 kB, 1409 KiB) transferred 63.367s, 23 kB/s
2880+0 records in
2880+0 records out
1474560 bytes transferred in 63.919394 secs (23069 bytes/sec)
Waiting for new disk to be inserted...
^C
</code></pre>
<p>In the end we have the following files:</p>
<pre><code>ACCESSORIES.dmg
APPLICATION.dmg
INSTALL.dmg
T2S DISK.dmg
</code></pre>
<p>In writing this I learned about <code>diskutil</code>'s <code>-plist</code> option, which outputs info in a <a href="https://en.wikipedia.org/wiki/Property_list">property list</a> XML format. The tool <code>plutil</code> is handy for extracting info from these or converting the output, but contrary to its <code>man</code> page, when using the <code>json</code> format it will not output to stdout but instead write to a <code>&lt;stdout&gt;</code> file unless <code>-o -</code> is provided. After converting to JSON, we can use the incredibly useful <a href="https://jqlang.github.io/jq/"><code>jq</code></a> utility to wrangle it. I was bitten by <a href="https://github.com/jqlang/jq/issues/354">this issue</a> where <code>jq -r</code> (raw output) outputs nothing for an empty string but the string <code>null</code> for missing properties, which we solve with <code>// empty</code>. I also learned about <code>dd</code>'s <code>status=progress</code> option, which will display live information about the copy operation instead of waiting until the end.</p>
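<p>Here is that <code>jq</code> behavior in isolation (assuming <code>jq</code> is installed):</p>
<pre><code class="language-sh">echo '{&quot;VolumeName&quot;:&quot;INSTALL&quot;}' | jq -r '.MountPoint'           # prints the string: null
echo '{&quot;VolumeName&quot;:&quot;INSTALL&quot;}' | jq -r '.MountPoint // empty'  # prints nothing
</code></pre>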
<h2 id="vmware">VMWare</h2>
<p>To test these disks out, I wanted to spin up a VM in VMWare ESXi 6.0. First we need to install DOS 6.22, then install Windows for Workgroups atop it. ESXi has some quirks with the floppy drive: it must be connected on VM power on because it cannot be connected afterwards, it must contain a disk, and the VM will not boot from the hard drive if a floppy disk is present. This necessitates booting from a floppy disk if we wish to use a floppy drive.</p>
<ol>
<li>
<p>Create a new VM. I selected Windows as the Guest OS family and Windows 3.1 as the version.</p>
</li>
<li>
<p>Add a floppy disk drive. Rename the <code>.dmg</code> files we created to <code>.flp</code>, then upload them. Select the first DOS 6.22 disk. Enable the &quot;Connect&quot; checkbox; the floppy disk cannot be connected after the VM is started.</p>
</li>
<li>
<p>Start the VM, follow the DOS Setup wizard. When it asks for the next disk: suspend the VM, edit settings and select the next disk, then resume the VM.</p>
</li>
<li>
<p>Once DOS is installed, stop the VM. Configure the floppy disk drive with the first DOS disk again. Start the VM.</p>
</li>
<li>
<p>Exit the Setup wizard by sending F3 from Actions, Guest OS, Send Keys. Send it again to confirm.</p>
</li>
<li>
<p>Suspend the VM, edit settings and configure the floppy disk drive with the first Windows for Workgroups disk. Resume the VM.</p>
</li>
<li>
<p>From <code>A:</code> run <code>setup</code></p>
</li>
<li>
<p>Continue through the Windows Setup wizard</p>
</li>
<li>
<p>When it asks for the next disk, once again suspend the VM, edit settings and update the floppy disk drive disk, and resume.</p>
</li>
<li>
<p>Once the wizard concludes, we need to edit the <code>CONFIG.SYS</code> file from MS-DOS:</p>
<p>Navigate to <code>C:\DOS</code>, run <code>edit \CONFIG.SYS</code> (or run <code>edit</code> and using Alt+F, navigate to Open. Navigate up one directory with <code>..</code> and change the <code>*</code> to <code>*.SYS</code>. Highlight <code>CONFIG.SYS</code> and open it).</p>
<p>Edit the path to <code>HIMEM.SYS</code> from <code>C:\DOS</code> to <code>C:\WINDOWS</code>. Save the file and exit.</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/config-sys.png" alt="Editing CONFIG.SYS" />
<figcaption>Editing <code>CONFIG.SYS</code></figcaption>
</figure>
</li>
<li>
<p>Shut down the VM. Disconnect the floppy disk drive. If we boot from the DOS disk, we won't have the correct <code>CONFIG.SYS</code> settings loaded for Windows.</p>
</li>
<li>
<p>Start up the VM. Navigate to <code>C:\WINDOWS</code> and run <code>win</code>.</p>
</li>
<li>
<p>Windows 3.1 should start.</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/mswfw.png" alt="Microsoft Windows for Workgroups" />
<figcaption>Microsoft Windows for Workgroups</figcaption>
</figure>
</li>
</ol>
<p>Unfortunately, this setup has neither a working mouse nor networking. For Windows NT, an old version of VMWare Tools did the trick, but Windows for Workgroups is too old to have ever been supported. VMWare also reports an error if a sound card is configured.</p>
<h2 id="86box">86Box</h2>
<p>For a better experience with these disks, on era-appropriate emulated hardware, we can use 86Box. I used the <a href="https://github.com/Moonif/MacBox">MacBox</a> user interface for <a href="https://86box.net/">86Box</a>. The 86Box project even has a <a href="https://youtu.be/fDBuXuG7fao?si=vEDLToj-BVbMfVKI">video tutorial</a> on installing Windows for Workgroups with options ideal for sound, video, and networking. They also have an IRC channel, <code>#86Box</code> at <code>irc.ringoflightning.net</code>: connect with TLS on 6697 and register your nick with NickServ, then connect using SASL.</p>
<ol>
<li>
<p>Download the zipped <code>MacBox.app</code> from the latest <a href="https://github.com/Moonif/MacBox/releases">release</a>, unzip it, and copy it into the Applications folder.</p>
</li>
<li>
<p>Open MacBox; you'll need to allow it via the Privacy &amp; Security settings pane.</p>
</li>
<li>
<p>Click the red icon at the bottom left indicating that 86Box is not installed; this will bring you to the Jenkins build page for the latest release for your architecture<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. Click the highlighted release, unzip it, and copy it into the Applications folder.</p>
</li>
<li>
<p>Download the necessary ROMs from the latest <a href="https://github.com/86Box/roms/releases">release</a>, unzip the archive, and copy its contents into <a href="https://86box.readthedocs.io/en/latest/usage/roms.html"><code>~/Library/Application Support/net.86box.86Box/roms</code></a>.</p>
</li>
<li>
<p>Add a new VM in MacBox. When you attempt to edit its settings, MacBox will try to open 86Box; you'll need to allow it via the Privacy &amp; Security settings pane.</p>
</li>
<li>
<p>When editing settings:</p>
<ul>
<li>choose the i386DX machine type and the AMI 386DX Clone machine with 16MB of memory,</li>
<li>choose the ATI Mach64 GX video card,</li>
<li>choose the Logitech/Microsoft Bus Mouse,</li>
<li>choose the Sound Blaster 16 sound card,</li>
<li>choose the 3COM EtherLink II network card and <a href="https://86box.readthedocs.io/en/latest/hardware/network.html">SLiRP</a>,</li>
<li>create a new 100MB hard disk,</li>
<li>choose the Western Digital ISA16 hard drive controller,</li>
<li>configure a 3.5&quot; 1.4MB floppy disk drive (<em>not</em> the PS/2 version).</li>
</ul>
</li>
<li>
<p>On initial boot, you'll need to configure the BIOS with F1. The clone provides a much simpler BIOS interface than some of the earlier machines. Choose standard setup and navigate to C:, enter the cylinders, heads, and sector information from the settings. Choose a 3.5&quot; 1.4MB floppy disk drive for A:.</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/bios.png" alt="BIOS Setup Program" />
<figcaption>BIOS Setup Program</figcaption>
</figure>
</li>
<li>
<p>Open the first MS-DOS floppy disk image (rename it to <code>.flp</code> first).</p>
</li>
<li>
<p>MS-DOS setup will start; swap disks as prompted. (Middle-click to free the mouse.)</p>
</li>
<li>
<p>Boot into DOS and insert the first Windows for Workgroups disk. Navigate to <code>A:</code> and run <code>setup</code>.</p>
</li>
<li>
<p>Follow the setup instructions, swapping through Windows disks as needed.</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/windows-setup.png" alt="Windows for Workgroups Setup" />
<figcaption>Windows for Workgroups Setup</figcaption>
</figure>
</li>
<li>
<p>Choose to install the Windows network (see the <a href="#tcpip">section</a> below on TCP/IP).</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/windows-setup-networks.png" alt="Windows Setup: Networks" />
<figcaption>Windows Setup: Networks</figcaption>
</figure>
</li>
<li>
<p>Choose the network card driver to install, in our case the 3COM EtherLink II</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/windows-setup-network-adapter.png" alt="Windows Setup: Add Network Adapter" />
<figcaption>Windows Setup: Add Network Adapter</figcaption>
</figure>
<p>and set the address it operates at (available under Settings, Network, then Configure next to the adapter)</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/windows-setup-3com.png" alt="Windows Setup: 3Com EtherLink 16" />
<figcaption>Windows Setup: 3Com EtherLink 16</figcaption>
</figure>
<p>At this point the installer will ask for disks seven and eight after copying some files.</p>
</li>
<li>
<p>Upon reboot, start Windows from MS-DOS:</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/dos.png" alt="Start Windows from MS-DOS" />
<figcaption>Start Windows from MS-DOS</figcaption>
</figure>
<p>and you should see the startup screen!</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/windows-start.png" alt="Windows Startup Screen" />
<figcaption>Windows Startup Screen</figcaption>
</figure>
<p>You will get an error about the network card, because SLiRP doesn't support any network protocol besides IP.</p>
</li>
</ol>
<h3 id="mouse">Mouse</h3>
<p>If, like me, you configured the mouse after installing Windows, follow these instructions:</p>
<ul>
<li>open Windows Setup, confirm that the Mouse entry is empty,</li>
<li>choose the Options menu with Alt+O, choose System Settings,</li>
<li>tab to Mouse, arrow key through the list and choose the Logitech option,</li>
<li>insert Windows for Workgroups disk two as prompted,</li>
<li>reboot.</li>
</ul>
<h3 id="sound-blaster-16">Sound Blaster 16</h3>
<p>Open the Sound Blaster 16 <code>INSTALL</code> disk, navigate to <code>A:</code> in the File Manager, and open <code>install.exe</code>. The installer opens as a full-screen text installer; follow the directions onscreen and swap out disks accordingly.</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/sound-blaster-install.png" alt="Sound Blaster 16 Installer" />
<figcaption>Sound Blaster 16 Installer</figcaption>
</figure>
<p>The installer performs an install in a DOS-compatible way at the root of the <code>C:</code> drive, before copying some Windows 3.1 specific files into <code>C:\WINDOWS\SYSTEM</code>.</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/sound-blaster-install-summary.png" alt="Sound Blaster 16 Installer Summary" />
<figcaption>Sound Blaster 16 Installer Summary</figcaption>
</figure>
<p>Remove the disk and reboot. Upon reboot, you'll be greeted with the Creative DOS Multimedia Architecture copyright notice and the additional <code>AUTOEXEC.BAT</code> commands in the DOS prompt.</p>
<p>In fact, we can leverage this same <code>AUTOEXEC.BAT</code> file to automatically start Windows for Workgroups when we start our emulator. Simply add the line:</p>
<pre><code>C:\WINDOWS\WIN
</code></pre>
<p>to the end with <code>edit autoexec.bat</code>, then save, exit, and reboot.</p>
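<p>After both the Sound Blaster install and this change, the tail of <code>AUTOEXEC.BAT</code> might look something like the following sketch; the <code>SET BLASTER</code> values and <code>C:\SB16</code> paths are illustrative, since the installer writes whatever matches your card settings:</p>
<pre><code>SET SOUND=C:\SB16
SET BLASTER=A220 I5 D1 H5 P330 T6
C:\SB16\DIAGNOSE /S
C:\SB16\MIXERSET /P /Q
C:\WINDOWS\WIN
</code></pre>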
<p>Upon first running Windows after the Sound Blaster installation, you'll be prompted to create a Sound Blaster 16 program group window; click OK. You should now see the following window populate:</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/sound-blaster-program-group.png" alt="Sound Blaster 16 Program Group" />
<figcaption>Sound Blaster 16 Program Group</figcaption>
</figure>
<p>Open the <code>T2S DISK</code> disk in the File Manager by navigating to <code>A:</code>, then double-click <code>install.exe</code>. This application is a Windows installer:</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/sound-blaster-install-text-to-speech.png" alt="Sound Blaster Text to Speech Installer" />
<figcaption>Sound Blaster Text to Speech Installer</figcaption>
</figure>
<p>Continue using the default options. Once finished, you'll have the Text to Speech program group:</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/sound-blaster-program-group-text-to-speech.png" alt="Text to Speech Program Group" />
<figcaption>Text to Speech Program Group</figcaption>
</figure>
<p>To hear sounds via the Sound Blaster 16 emulated card, we need to turn up the volume via the Creative Mixer app in the Sound Blaster 16 program group. Slide all sliders to max, and you should hear a sound from e.g. the Test button in Sound under Control Panel.</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/sound-blaster-creative-mixer.png" alt="Creative Mixer" />
<figcaption>Creative Mixer</figcaption>
</figure>
<p>In the emulator settings, navigate to Sound, then Configure, and check the Control PC Speaker box.</p>
<h3 id="ati-mach-64">ATi Mach 64</h3>
<p>Open the first mach64 disk and exit Windows to return to DOS (installation cannot be run under the Windows DOS Box). Run <code>A:\INSTALL</code>; you should see</p>
<blockquote>
<p>Invalid EEPROM status. Press any key to initialize...</p>
</blockquote>
<p>At this point the system hung, so I reset the emulator and ran install again. This time, I got a wizard:</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/mach64-install.png" alt="ATi Mach 64 Installer" />
<figcaption>ATi Mach 64 Installer</figcaption>
</figure>
<ul>
<li>First, run Quick Setup and choose <code>VESA-std 75 Hz 20&quot; 1280x1024</code> or another large resolution.</li>
<li>Then select Drivers Installation and choose Microsoft Windows and swap to disk two as instructed.</li>
<li>Choose Install the Windows 3.1 Driver.</li>
<li>Choose Display drivers and confirm the paths (MVA fails).</li>
<li>Exit setup with ESC.</li>
<li>Start Windows with <code>WIN</code> from the DOS prompt.</li>
</ul>
<p>Open ATi Desktop, then the FlexDesk+ control panel, and configure a larger resolution and higher color depth. The change only takes full effect after a reboot. Unfortunately, ATi changes the font to a larger size which is unwieldy and a bit ugly.</p>
<h3 id="tcpip">TCP/IP</h3>
<p>To use an IP network (e.g. SLiRP), we need an IP stack. Like the Macintosh System Software releases of this era, an IP stack isn't included as part of the OS. The one I used is <a href="https://winworldpc.com/product/microsoft-tcp-ip-32/tcpip-32-3-11b">Microsoft's TCP/IP-32</a> for Windows for Workgroups, which requires a 32-bit processor like the i386. As described in <a href="http://blog.becker.sc/2012/10/windows-311-for-workgroup-how-to.html"><em>How to Install the TCP/IP Protocol</em></a>, a TCP/IP stack was often included with your web browser, such as the one bundled with the dialer in Internet Explorer 3.02. The article <a href="https://www.fastcompany.com/3053173/what-it-was-like-to-build-a-website-in-1995"><em>What It Was Like to Build a World Wide Web Site In 1995</em></a> mentions the shareware Trumpet Winsock as another popular alternative.</p>
<ul>
<li>
<p>Place the <code>Disk01.img</code> in the virtual floppy drive</p>
</li>
<li>
<p>In File Manager, create a new folder e.g. <code>C:\TCPIP</code></p>
</li>
<li>
<p>Copy the <code>tcp32b.exe</code> from the A drive into our new folder</p>
</li>
<li>
<p>Open the executable; it's a self-extracting archive which will expand into our new directory.</p>
</li>
<li>
<p>From the Network program group, open Network Setup</p>
</li>
<li>
<p>Next to the 3Com ethernet adapter, click Drivers, then Add Protocol, choose Unlisted or Updated Protocol and click OK</p>
</li>
<li>
<p>In the dialog box, type our directory path <code>C:\TCPIP</code>; it will list Microsoft TCP/IP-32 3.11b. Click OK.</p>
</li>
<li>
<p>Our new TCP/IP protocol is now installed! Remove the Microsoft NetBEUI protocol since we won't be using it.</p>
</li>
<li>
<p>Upon closing the window, a Microsoft TCP/IP Configuration window opens. According to the 86Box docs:</p>
<blockquote>
<p>The virtual router provides automatic IP configuration to the emulated machine through DHCP</p>
</blockquote>
<p>So we can check the Enable Automatic DHCP Configuration box and click OK.</p>
</li>
<li>
<p>At this point you'll be prompted to reboot.</p>
</li>
<li>
<p>After rebooting, we can delete our temporary <code>C:\TCPIP</code> directory.</p>
</li>
</ul>
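<p>If DHCP configuration fails, you can instead enter addresses by hand in the Microsoft TCP/IP Configuration window. My assumption, worth verifying against the 86Box networking docs, is that 86Box's SLiRP backend follows the conventional user-mode networking layout inherited from QEMU's SLiRP:</p>
<pre><code>IP Address:      10.0.2.15
Subnet Mask:     255.255.255.0
Default Gateway: 10.0.2.2
DNS Server:      10.0.2.3
</code></pre>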
<p>If all went well, we should now be able to use tools like Telnet:</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/telnet.png" alt="Telnet" />
<figcaption>Telnet</figcaption>
</figure>
<h3 id="microsoft-word-60">Microsoft Word 6.0</h3>
<p><a href="https://winworldpc.com/product/microsoft-word/6x">Microsoft Word 6.0</a> was the version released for Windows 3.1 (with a version for NT), and was distributed on nine 3.5&quot; 1.4MB floppy disks, more than Windows for Workgroups itself. I've used the significantly smaller Word 4 and 5 on my System 6 and System 7 Macintosh SEs. After a painfully long and dull install process, we can finally lay our eyes upon the word processing behemoth.</p>
<figure>
<img src="/resources/images/2023-11-04-mswfw/word.png" alt="Microsoft Word 6.0" />
<figcaption>Microsoft Word 6.0</figcaption>
</figure>
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>For example, I downloaded the highlighted <a href="https://ci.86box.net/job/86Box/">release</a>:</p>
<p><img src="/resources/images/2023-11-04-mswfw/86box-download.png" alt="86Box release" />&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2023-10-10-vnc</id>
    <title>Vintage VNC</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2023-10-10-vnc" />
    <published>2023-10-10T00:00:00-05:00</published>
    <summary>Configuring VNC on Windows NT and OS X Tiger</summary>
    
    <media:content url="https://connor.zip/resources/images/2023-10-10-vnc/ibook.jpg" medium="image" width="800" height="640"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
<p>I run several VMs on ESXi 6.0, the last version to support the HP ProLiant DL380 G7 it runs on. The VMs are mostly *nix servers, but there's also the odd server with a graphical user interface. One example is the Windows NT 4.0 Server<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> VM I spun up to interface with the web UI on an HP DesignJet 650C network card. The card's web interface is constructed entirely of Java Applets, and only functions on IE 4<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> or Netscape Navigator 4.x (with the exception of 4.4), which must be running on Windows. Another is a <a href="https://www.haiku-os.org/">Haiku</a> instance.</p>
<p>I can access these UIs from the VMWare ESXi web interface, which on my local network is at <code>https://vms.home.arpa</code>, but it would be neat to access these instances outside of ESXi as well. Recently, I've been looking at software for the iBook G4<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup> I picked up at an estate sale, which runs OS X 10.4.11 Tiger. While I could upgrade to OS X 10.5 Leopard, I likely won't until I've made backups so I can boot into multiple releases (such as 10.3 Panther, the last to support AppleTalk over Ethernet). Apple provides its own software for remote access, <a href="https://en.wikipedia.org/wiki/Apple_Remote_Desktop">Apple Remote Desktop</a>; version 2 replaced the underlying protocol with VNC. Virtual Network Computing (VNC) is the de-facto standard remote desktop protocol, and is supported by many open client and server implementations. Use <a href="https://www.macintoshrepository.org/15278-apple-remote-desktop-3">ARD 3.0 and the 3.3 upgrade</a> and <a href="http://crackserialnumber.blogspot.com/2013/10/apple-remote-desktop-35-key.html">these keys</a> within the 3.0 installer. Software Update on OS X 10.4 will still locate the 3.4 update and install it.</p>
<figure>
<img src="/resources/images/2023-10-10-vnc/ibook.jpg" alt="iBook G4 running Apple Remote Desktop 3.3" />
<figcaption>iBook G4 running Apple Remote Desktop 3.3</figcaption>
</figure>
<h2 id="winvnc">WinVNC</h2>
<p>On Windows NT, we can use <a href="https://web.mit.edu/cdsdev/src/winvnc.html">WinVNC</a>, the original VNC client and server developed by the Olivetti &amp; Oracle Research Lab (ORL) at Cambridge. I installed the version distributed by <a href="https://archive.org/details/tucows_73773_PalmVNC">Palm</a>, and unzipped it with <a href="https://archive.org/details/wz32v800_exe">WinZip 8.0</a>. To get the files onto the VM, since I didn't have a file share set up yet, I ran <code>python -m SimpleHTTPServer 9000</code> within the extracted folder on my laptop and pointed Internet Explorer 4.0 at that address. Once installed, navigate to Start, Programs, Vnc, then WinVNC Server (Install Service). It will prompt you to start it from the Control Panel: navigate to Start, Settings, Control Panel, Services, scroll down to VNC Server, and click Start. Then reboot the VM; you should be prompted to set a password on the next login.</p>
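<p>The <code>SimpleHTTPServer</code> module is Python 2 only; on a modern machine the same one-off file server can be sketched with Python 3's <code>http.server</code> (the port and bind address here are illustrative):</p>
<pre><code class="language-sh">python3 -m http.server 9000 --bind 127.0.0.1 &amp;
SERVER_PID=$!
sleep 1
# Fetch the directory listing to confirm the server is up; prints the HTTP status.
python3 -c 'from urllib.request import urlopen; print(urlopen("http://127.0.0.1:9000/").getcode())'
kill "$SERVER_PID"
</code></pre>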
<p>From the iBook, we can open Apple Remote Desktop, find the Windows NT machine in the Scanner pane, click Control and fill in the password (no username necessary). The window should display our remote desktop! If you see a black screen, try minimizing and then expanding the window.</p>
<figure>
<img src="/resources/images/2023-10-10-vnc/apple-remote-desktop.png" alt="Windows NT over VNC on Apple Remote Desktop" />
<figcaption>Windows NT over VNC on Apple Remote Desktop</figcaption>
</figure>
<p>From a MacBook Pro running a modern version of macOS, we can simply press ⌘-K from the Finder and type in our server's address, e.g. <code>vnc://nt.home.arpa</code>. The system will prompt for a password and then display the Windows NT desktop. You can enter fullscreen and change the Windows NT display properties to the largest supported size, 2560x1600, for an immersive experience.</p>
<figure>
<img src="/resources/images/2023-10-10-vnc/macos.png" alt="Windows NT over VNC on macOS Sonoma's VNC client" />
<figcaption>Windows NT over VNC on macOS Sonoma's VNC client</figcaption>
</figure>
<h2 id="esxis-vnc-server">ESXi's VNC Server</h2>
<p>We can also do this natively from ESXi, using its built-in VNC server:</p>
<ul>
<li>
<p>First, enable the SSH service; this can be done from the web or console interface.</p>
</li>
<li>
<p>Next, connect to the server over SSH.</p>
</li>
<li>
<p>Modify the firewall service file to be editable:</p>
<pre><code class="language-sh">; chmod 644 /etc/vmware/firewall/service.xml
; chmod +t /etc/vmware/firewall/service.xml
</code></pre>
</li>
<li>
<p>Edit it:</p>
<pre><code class="language-sh">; vi /etc/vmware/firewall/service.xml
</code></pre>
<p>and add the following entry:</p>
<pre><code class="language-xml">&lt;service id='0042'&gt;
   &lt;id&gt;vnc&lt;/id&gt;
   &lt;rule id='0000'&gt;
      &lt;direction&gt;inbound&lt;/direction&gt;
      &lt;protocol&gt;tcp&lt;/protocol&gt;
      &lt;porttype&gt;dst&lt;/porttype&gt;
      &lt;port&gt;
         &lt;begin&gt;5900&lt;/begin&gt;
         &lt;end&gt;5999&lt;/end&gt;
      &lt;/port&gt;
   &lt;/rule&gt;
   &lt;enabled&gt;true&lt;/enabled&gt;
   &lt;required&gt;true&lt;/required&gt;
&lt;/service&gt;
</code></pre>
<p>Each VM will require its own port on the ESXi server, which is why we use a port range.</p>
</li>
<li>
<p>Reload the firewall:</p>
<pre><code class="language-sh">; esxcli network firewall refresh
</code></pre>
<p>and ensure it's running:</p>
<pre><code class="language-sh">; esxcli network firewall ruleset list | grep vnc
</code></pre>
</li>
</ul>
<p>Now shut down the VM you'd like to configure VNC for. Edit its settings, and under advanced add the following:</p>
<table>
<thead>
<tr>
<th>Field</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>RemoteDisplay.vnc.enabled</code></td>
<td><code>TRUE</code></td>
</tr>
<tr>
<td><code>RemoteDisplay.vnc.port</code></td>
<td><code>5900</code></td>
</tr>
<tr>
<td><code>RemoteDisplay.vnc.password</code></td>
<td><code>hunter2</code></td>
</tr>
</tbody>
</table>
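<p>These advanced settings are stored as plain lines in the VM's <code>.vmx</code> configuration file, so they can also be added there directly while the VM is powered off (the password here is the placeholder used above):</p>
<pre><code>RemoteDisplay.vnc.enabled = &quot;TRUE&quot;
RemoteDisplay.vnc.port = &quot;5900&quot;
RemoteDisplay.vnc.password = &quot;hunter2&quot;
</code></pre>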
<p>Start the VM and you should now be able to connect ARD to the ESXi server's IP on the port configured above. For each additional VM, just change the port. I was able to connect ARD and even watch the boot-up sequence of the server, but the image was yellowed (apparently a product of the bit depth of the image) and not very high resolution. It also disabled the console in ESXi's web interface.</p>
<figure>
<img src="/resources/images/2023-10-10-vnc/esxi-vnc.jpg" alt="ESXi's VNC over Apple Remote Desktop 3.3" />
<figcaption>ESXi's VNC over Apple Remote Desktop 3.3</figcaption>
</figure>
<h2 id="apple-remote-desktop">Apple Remote Desktop</h2>
<p>We can also access our iBook over VNC<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>, because OS X 10.4 comes with its own VNC software installed. To enable it, find Apple Remote Desktop under Sharing in the System Settings:</p>
<figure>
<img src="/resources/images/2023-10-10-vnc/ibook-sharing.png" alt="OS X Tiger Sharing Settings" />
<figcaption>OS X Tiger Sharing Settings</figcaption>
</figure>
<p>then under Access Privileges, select the users to enable remote access for and check at least Observe.</p>
<figure>
<img src="/resources/images/2023-10-10-vnc/ibook-ard.png" alt="OS X Tiger ARD Access Privileges" />
<figcaption>OS X Tiger ARD Access Privileges</figcaption>
</figure>
<p>Finally, connect to the iBook from our client. In macOS we can do this via ⌘-K by entering the VNC address <code>vnc://ibook.home.arpa</code> or the Network pane in Finder since OS X advertises this feature via Bonjour. The above screenshots were taken from the macOS Sonoma VNC viewer.</p>
<p>Now I can do my OS X Tiger exploration from the comfort of a modern MacBook Pro. I do have to leave the iBook lid open; even when plugged in, the iBook will sleep if the lid is closed, reportedly unless it has a monitor and keyboard connected.</p>
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>The images for Windows NT Server 4.0 I used are titled <code>winnt40_x86en_entsrv.d1.iso</code> and <code>winnt40_x86en_entsrv.d2.iso</code>, they are available from the <a href="https://archive.org/details/winnt40_x86en_entsrv.d1">Internet Archive</a>. To run it under VMWare ESXi, you also need VMWare Tools 3.5, which is still available from <a href="https://packages.vmware.com/tools/esx/3.5latest/windows/x86/VMware-tools-windows-3.5.0-988599.iso">VMWare</a>, without which the mouse will not function and the screen resolution will be rather limited.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>A copy of the Internet Explorer 4.0 install disk is available on the <a href="https://archive.org/details/ie4-win95-winnt">Internet Archive</a>.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>To create a new admin account on an iBook G4 for which you don't know the existing admin password, boot into single-user mode by holding down ⌘-S while booting. Then run:</p>
<pre><code class="language-sh">mount -rw /
rm /var/db/.AppleSetupDone
reboot
</code></pre>
<p>Upon reboot, the system will detect the absence of the <code>.AppleSetupDone</code> file and run through the initial setup wizard again, allowing you to set up a new admin account.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>More information available in this <a href="https://www.dssw.co.uk/blog/2007-05-14-a-vnc-server-is-included-in-mac-os-x-104/">blog post</a>.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2023-09-30-resize-vm-disk</id>
    <title>Resizing a Fedora VM Disk</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2023-09-30-resize-vm-disk" />
    <published>2023-09-20T00:00:00-05:00</published>
    <summary>How to extend a Fedora VM disk to fill the available space when using LVM and XFS</summary>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>Writing this down since I had such a hard time finding all the pieces. This procedure is for expanding the disk of a Fedora (in this case Fedora Server 35) virtual machine using LVM and XFS (the default). I use VMs on ESXi, but this should apply to any Fedora VM scenario.</p>
<ol>
<li>
<p>Expand the disk in your virtualization system. In ESXi, I chose the VM, clicked Edit, then increased the size next to the Hard Disk.</p>
<figure>
<img src="/resources/images/2023-09-30-resize-vm-disk/esxi-resize-disk.png" alt="Resizing a disk in ESXi" />
<figcaption>Resizing a disk in ESXi</figcaption>
</figure>
</li>
<li>
<p>Use <code>lsblk</code> to determine which disk holds your root filesystem:</p>
<pre><code class="language-sh">; lsblk
</code></pre>
<p>On my system, it outputs the following table:</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>Major:Minor</th>
<th>Rm</th>
<th>Size</th>
<th>Read-Only</th>
<th>Type</th>
<th>Mount Points</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>sda</code></td>
<td>8:0</td>
<td>0</td>
<td>32GB</td>
<td>0</td>
<td>disk</td>
<td></td>
</tr>
<tr>
<td>↳<code>sda1</code></td>
<td>8:1</td>
<td>0</td>
<td>1G</td>
<td>0</td>
<td>part</td>
<td>/boot</td>
</tr>
<tr>
<td>↳<code>sda2</code></td>
<td>8:2</td>
<td>0</td>
<td>15G</td>
<td>0</td>
<td>part</td>
<td></td>
</tr>
<tr>
<td>  ↳ <code>fedora_fedora-root</code></td>
<td>253:0</td>
<td>0</td>
<td>15G</td>
<td>0</td>
<td>lvm</td>
<td>/</td>
</tr>
</tbody>
</table>
<p>plus the CD-ROM and swap partitions.</p>
<p>This table gives us some valuable information. The <code>/</code> filesystem we want to expand is a Logical Volume Manager (LVM) volume called <code>root</code> within a Volume Group called <code>fedora_fedora</code>. The Volume Group sits atop the physical partition <code>/dev/sda2</code>, which is a partition of the disk <code>/dev/sda</code>. We can see the effects of our expansion at the disk level, but we need to propagate it through the partition, the LVM Physical Volume within our Volume Group, the LVM Logical Volume, and the filesystem.</p>
</li>
<li>
<p>Use <code>parted</code> to increase the partition size of <code>/dev/sda</code><sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>.</p>
<pre><code class="language-sh">; sudo parted /dev/sda
</code></pre>
<p>Within <code>parted</code>, use <code>print</code> to list your volumes:</p>
<pre><code>(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 34.4GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
</code></pre>
<p>It will then print the following table:</p>
<table>
<thead>
<tr>
<th>Number</th>
<th>Start</th>
<th>End</th>
<th>Size</th>
<th>Type</th>
<th>File system</th>
<th>Flags</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1049kB</td>
<td>1075MB</td>
<td>1074MB</td>
<td>primary</td>
<td>xfs</td>
<td>boot</td>
</tr>
<tr>
<td>2</td>
<td>1075MB</td>
<td>17.2GB</td>
<td>16.1GB</td>
<td>primary</td>
<td></td>
<td>lvm</td>
</tr>
</tbody>
</table>
<p>Resize partition number two, which backs LVM, to the size reported next to <code>Disk /dev/sda:</code> above:</p>
<pre><code>(parted) resizepart 2 34.4GB
</code></pre>
<p>Print the updated table:</p>
<pre><code>(parted) print
</code></pre>
<table>
<thead>
<tr>
<th>Number</th>
<th>Start</th>
<th>End</th>
<th>Size</th>
<th>Type</th>
<th>File system</th>
<th>Flags</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1049kB</td>
<td>1075MB</td>
<td>1074MB</td>
<td>primary</td>
<td>xfs</td>
<td>boot</td>
</tr>
<tr>
<td>2</td>
<td>1075MB</td>
<td>34.4GB</td>
<td>33.3GB</td>
<td>primary</td>
<td></td>
<td>lvm</td>
</tr>
</tbody>
</table>
<p>Then exit:</p>
<pre><code>(parted) quit
</code></pre>
<p>Now running <code>lsblk</code> outputs the following:</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>Major:Minor</th>
<th>Rm</th>
<th>Size</th>
<th>Read-Only</th>
<th>Type</th>
<th>Mount Points</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>sda</code></td>
<td>8:0</td>
<td>0</td>
<td>32GB</td>
<td>0</td>
<td>disk</td>
<td></td>
</tr>
<tr>
<td>↳<code>sda1</code></td>
<td>8:1</td>
<td>0</td>
<td>1G</td>
<td>0</td>
<td>part</td>
<td>/boot</td>
</tr>
<tr>
<td>↳<code>sda2</code></td>
<td>8:2</td>
<td>0</td>
<td>31G</td>
<td>0</td>
<td>part</td>
<td></td>
</tr>
<tr>
<td>↳ <code>fedora_fedora-root</code></td>
<td>253:0</td>
<td>0</td>
<td>15G</td>
<td>0</td>
<td>lvm</td>
<td>/</td>
</tr>
</tbody>
</table>
</li>
<li>
<p>Extend the LVM Physical Volume<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> by running:</p>
<pre><code class="language-sh">; sudo pvresize /dev/sda2
Physical volume &quot;/dev/sda2&quot; changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized
</code></pre>
<p>This will reflect the available space in <code>/dev/sda2</code> as free space within the LVM Volume Group.</p>
</li>
<li>
<p>Extend the logical volume by running:</p>
<pre><code class="language-sh">; sudo lvextend -l+100%FREE fedora_fedora/root
</code></pre>
<p>which extends the LVM Logical Volume <code>root</code> within the Volume Group <code>fedora_fedora</code>.</p>
</li>
<li>
<p>Extend the filesystem<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>:</p>
<pre><code class="language-sh">; sudo xfs_growfs /
...
data blocks changed from 3931136 to 8125440
</code></pre>
<p>at this point <code>df -h</code> will show the new size of our filesystem.</p>
</li>
</ol>
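<p>For reference, the whole pipeline condenses to four commands. The sketch below echoes them instead of executing them, since they modify real disks; the device and volume names are the ones from this example, so adjust them and drop the <code>run</code> wrapper (running with <code>sudo</code>) on a real system:</p>
<pre><code class="language-sh">run() { echo "+ $*"; }                        # dry-run wrapper; swap for: run() { "$@"; }
run parted /dev/sda resizepart 2 100%         # grow the partition to the end of the disk
run pvresize /dev/sda2                        # grow the LVM Physical Volume
run lvextend -l +100%FREE fedora_fedora/root  # grow the Logical Volume into the free space
run xfs_growfs /                              # grow the XFS filesystem to fill the volume
</code></pre>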
<p>I attempted to use <a href="https://github.com/bradfitz/embiggen-disk"><code>embiggen-disk</code></a> first, but <code>embiggen-disk /</code> complained that the MBR partition was an unknown type <code>8e</code>; type <code>8e</code> is an LVM partition.</p>
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>See the Red Hat documentation on <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_file_systems/partition-operations-with-parted_managing-file-systems#proc_resizing-a-partition-with-parted_partition-operations-with-parted">resizing a partition with <code>parted</code></a>.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>This <a href="https://serverfault.com/a/424682">serverfault answer</a> provided the steps for extending the physical volume and logical volume.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>See the Red Hat documentation on <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/storage_administration_guide/xfsgrow">increasing the size of an xfs file system</a>.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2023-09-05-znc</id>
    <title>ZNC</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2023-09-05-znc" />
    <published>2023-09-05T00:00:00-05:00</published>
    <summary>Configuring ZNC</summary>
    
    <media:content url="https://connor.zip/resources/images/2023-09-05-znc/macintosh-wallops.jpg" medium="image" width="800" height="640"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>I've been an off-and-on IRC user for years, and running <a href="/posts/2023-09-01-system-6-online/">Wallops on System 6</a> reminded me how helpful these text-based communities can be. On my MacBook I've used <a href="https://irssi.org/">IRSSI</a>, but I'm not an avid enough IRC-er to remember the commands. I've also used the <a href="https://matrix.org/">Matrix</a> bridge, which has its own challenges and has recently been <a href="https://matrix.org/blog/2023/07/deportalling-libera-chat/">disabled</a>. At least one large IRC network, Mozilla, has <a href="https://wiki.mozilla.org/IRC">migrated</a> from IRC to Matrix as of March 2020.</p>
<p>Lately I've used <a href="https://www.codeux.com/textual/">Textual</a> which is paid but also <a href="https://github.com/Codeux-Software/Textual">open source</a>. There are some issues with IRC that Matrix solved though, namely losing the history every time your client sleeps or restarts. The age-old solution to this is <a href="https://wiki.znc.in/ZNC">ZNC</a>.</p>
<p>I set up ZNC on the Fedora VM I use for miscellaneous services, <code>misc.home.arpa</code>. First, we can install <code>znc</code> from Fedora's package repos:</p>
<pre><code class="language-sh">sudo dnf install znc
</code></pre>
<p>Then, we need to run the init command:</p>
<pre><code class="language-sh">sudo -u znc znc --makeconf
</code></pre>
<p>This will ask you a series of questions such as the admin username and password, nick, etc. I set up a dedicated <code>admin</code> user with a random password (thanks <a href="https://1password.com/">1Password</a>) and left most things blank. It's best practice to have a dedicated admin account, and then create a standard user account for yourself to use with your IRC client. When choosing a port, I chose <code>6697</code> as proposed by <a href="https://www.rfc-editor.org/rfc/rfc7194">RFC 7194</a> and enabled SSL.</p>
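<p>The wizard writes its answers to <code>/var/lib/znc/.znc/configs/znc.conf</code>; the listener block it generates looks roughly like this (a sketch from my config, yours may differ):</p>
<pre><code>&lt;Listener l&gt;
    Port = 6697
    IPv4 = true
    IPv6 = true
    SSL = true
&lt;/Listener&gt;
</code></pre>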
<p>As the wizard warns, some browsers will not open <code>:6697</code>. I run Nginx on this server, so I added <code>/etc/nginx/conf.d/znc.conf</code> to reverse proxy to it from <code>:443</code> when the host is <code>znc.home.arpa</code>:</p>
<pre><code>server {
    listen       80;
    listen       [::]:80;
    server_name  znc.home.arpa;
    root         /usr/share/nginx/html;

    return 301 https://$host$request_uri;
}

# Settings for a TLS enabled server.
server {
    listen       443 ssl http2;
    listen       [::]:443 ssl http2;
    server_name  znc.home.arpa;
    root         /usr/share/nginx/html;

    ssl_certificate &quot;/etc/pki/nginx/server.crt&quot;;
    ssl_certificate_key &quot;/etc/pki/nginx/private/server.key&quot;;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout  10m;
    ssl_ciphers PROFILE=SYSTEM;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://127.0.0.1:6697/;
        proxy_set_header Host &quot;127.0.0.1&quot;;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
</code></pre>
<p>You need a couple of things for this to work:</p>
<ul>
<li>
<p>A DNS name (e.g. <code>znc.home.arpa</code>) that points to your server's IP. I set this up in pfSense under Services &gt; DNS Resolver; at the bottom, under Host Overrides, I added a record pointing to the static IP that DHCP is configured to assign my VM:</p>
<figure>
<img src="/resources/images/2023-09-05-znc/pfsense-host-override.png" alt="pfSense Host Overrides" />
<figcaption>pfSense Host Overrides</figcaption>
</figure>
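<p>If you don't run pfSense, any resolver override works; the simplest stand-in is an <code>/etc/hosts</code> entry on each client machine (the IP here is my VM's static address, substitute your own):</p>
<pre><code>10.0.3.3    znc.home.arpa misc.home.arpa
</code></pre>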
</li>
<li>
<p>A cert that your computer trusts, or you can skip TLS altogether. I use <a href="https://github.com/cloudflare/cfssl">cfssl</a> to manage my TLS certificates. You'll need to create a CA<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> and Intermediate CA, which I placed in a <code>certs</code> folder, and then a CSR file like this for your server:</p>
<pre><code class="language-json">{
    &quot;CN&quot;: &quot;misc.home.arpa&quot;,
    &quot;key&quot;: {
        &quot;algo&quot;: &quot;rsa&quot;,
        &quot;size&quot;: 2048
    },
    &quot;names&quot;: [
        {
            &quot;C&quot;: &quot;US&quot;,
            &quot;ST&quot;: &quot;Arkansas&quot;,
            &quot;L&quot;: &quot;Little Rock&quot;,
            &quot;O&quot;: &quot;Heavy Computer&quot;,
            &quot;OU&quot;: &quot;Heavy Computer Registry&quot;
        }
    ],
    &quot;hosts&quot;: [
        &quot;misc.home.arpa&quot;,
        &quot;cups.home.arpa&quot;,
        &quot;znc.home.arpa&quot;,
        &quot;localhost&quot;,
        &quot;10.0.3.3&quot;
    ]
}
</code></pre>
<p>I use the same certificate for all services on the VM, but you can be more granular if you'd like. To generate the cert from the CSR, run:</p>
<pre><code class="language-sh">; cfssl gencert -ca ../../intermediate-ca.pem -ca-key ../../intermediate-ca-key.pem -config ../../cfssl.json -profile=server misc.home.arpa.json | cfssljson -bare misc.home.arpa-server
</code></pre>
<p>Then copy the public key along with its intermediate public key to form a chain:</p>
<pre><code class="language-sh">; cat misc.home.arpa-server.pem ../../intermediate-ca.pem | pbcopy
</code></pre>
<p>And place it at <code>/etc/pki/nginx/server.crt</code> on your server, then copy the key file:</p>
<pre><code class="language-sh">; cat misc.home.arpa-server-key.pem | pbcopy
</code></pre>
<p>And place it at <code>/etc/pki/nginx/private/server.key</code>. Then adjust the permissions appropriately, so that Nginx can use it:</p>
<pre><code class="language-sh">; sudo ls -l /etc/pki/nginx/server.crt /etc/pki/nginx/private/server.key
-r--------. 1 nginx nginx 1676 Sep  5 16:49 /etc/pki/nginx/private/server.key
-r--------. 1 nginx nginx 3165 Sep  5 16:48 /etc/pki/nginx/server.crt
</code></pre>
</li>
</ul>
<p>You'll also need the firewall to allow HTTP/HTTPS traffic to Nginx:</p>
<pre><code class="language-sh">; sudo firewall-cmd --permanent --add-service=http
; sudo firewall-cmd --permanent --add-service=https
; sudo firewall-cmd --reload
</code></pre>
<p>And we should enable the ZNC service:</p>
<pre><code class="language-sh">; sudo systemctl enable --now znc.service
</code></pre>
<p>A quirk of ZNC on my system is that <code>systemctl stop znc.service</code> doesn't work; you need to <code>pkill znc</code> and then <code>systemctl start znc.service</code> to restart it.</p>
<p>Now, we should have a UI available at <code>https://znc.home.arpa</code>! From there, I logged in as <code>admin</code> and did the following:</p>
<ul>
<li>Under Global Settings, we can add a new Listen Port that only the UI uses (only HTTP and IPv4 are checked), say <code>6668</code>, which won't be exposed to the network -- set Bind Host to <code>127.0.0.1</code> to signify this. Then we can edit our Nginx config to use <code>http://127.0.0.1:6668/</code> as our upstream, so that we can only access it via Nginx.</li>
<li>We can update our <code>6697</code> port so that it doesn't have the HTTP checkbox.</li>
<li>We can add a <code>6667</code> port for unencrypted connections from clients like Wallops.</li>
</ul>
<figure>
<img src="/resources/images/2023-09-05-znc/znc-listen-ports.png" alt="ZNC Listen Ports" />
<figcaption>ZNC Listen Ports</figcaption>
</figure>
<p>We also want our clients to be able to talk to ZNC over TLS without self-signed cert errors, so we can use our same <code>cfssl</code> certs from Nginx for ZNC by copying them to <code>/etc/pki/znc/server.crt</code> and <code>/etc/pki/znc/private/server.key</code> and then <code>chown -R znc:znc /etc/pki/znc</code> so that ZNC can read them. Then, we can update the config at <code>/var/lib/znc/.znc/configs/znc.conf</code> with:</p>
<pre><code>SSLCertFile = /etc/pki/znc/server.crt
// SSLDHParamFile = /etc/pki/znc/server.crt
SSLKeyFile = /etc/pki/znc/private/server.key
</code></pre>
<p>At this point you may need to restart ZNC (in my case with <code>pkill znc</code> followed by <code>systemctl start znc.service</code>).</p>
<p>Now, we should expose these ports to the network via firewall:</p>
<pre><code class="language-sh">; sudo firewall-cmd --permanent --add-service=ircs
; sudo firewall-cmd --permanent --add-service=irc
; sudo firewall-cmd --reload
</code></pre>
<p>Next, you should add a new user for yourself in the UI. Then, under that user add a Network. I use Liberachat, with these options:</p>
<table>
<thead>
<tr>
<th>Hostname</th>
<th>Port</th>
<th>SSL</th>
<th>Password</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>irc.libera.chat</code></td>
<td>6697</td>
<td>Checked</td>
<td>Blank</td>
</tr>
</tbody>
</table>
<p>I use <a href="https://libera.chat/guides/certfp">CertFP</a> to authenticate, and for that we'll need to check the <code>cert</code> module's box on both the User and Network page. Then, go to the User Modules &gt; Certificate page and paste your certificate.</p>
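<p>If you don't already have a client certificate, Libera's CertFP guide generates one with <code>openssl</code>; a sketch, where the file names and CN are examples (the Certificate page wants the certificate and key pasted in together):</p>
<pre><code class="language-sh">openssl req -x509 -new -newkey rsa:4096 -sha256 -days 1096 -nodes \
    -subj /CN=cptaffe -keyout user.key -out user.crt
# print the SHA-512 fingerprint the network will see
openssl x509 -in user.crt -outform der | sha512sum | cut -d' ' -f1
</code></pre>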
<p>We'll also need to enable the <a href="https://wiki.znc.in/Sasl"><code>sasl</code></a> module on the Network page, which some networks require for CertFP to work, and from our client run <code>/msg *sasl Mechanism EXTERNAL</code> on each network.</p>
<p>Once connected, you can check that the certificate is being used by running <code>/whois &lt;your username&gt;</code> (after you've authenticated via another method), where you should see</p>
<pre><code>cptaffe has client certificate fingerprint ...
</code></pre>
<p>If so, you can add that fingerprint to your account with <code>/msg NickServ CERT ADD</code>. Once your certificate is added to your account and your SASL mechanism is set to <code>EXTERNAL</code>, you can reconnect with <code>/msg *status jump</code> to ensure you are correctly authenticated.</p>
<p>To connect to ZNC, configure your client's username and password fields:</p>
<table>
<thead>
<tr>
<th>Username</th>
<th>Password</th>
</tr>
</thead>
<tbody>
<tr>
<td>the ZNC <code>{username}/{network}</code> e.g. <code>cptaffe/libera</code></td>
<td>the ZNC password</td>
</tr>
</tbody>
</table>
<p>You can configure multiple connections, one for each network.</p>
<h3 id="time-zone">Time Zone</h3>
<p>There is a <a href="https://github.com/znc/znc/issues/1779">bug</a> in ZNC where server time is not round-tripped appropriately if the server isn't running in UTC. It's good practice to run servers in UTC anyway, so we can switch the system time zone:</p>
<pre><code class="language-sh">; sudo timedatectl set-timezone UTC
</code></pre>
<p>and then reboot. As described in this <a href="https://news.ycombinator.com/item?id=32326145">comment</a>, the playback module uses timestamps to determine what messages to deliver, so if the server time isn't synced properly then you may get duplicate messages or gaps.</p>
<p>I ran into this issue initially since my server was in my local time zone, which resulted in duplicate messages from the buffer on reconnect, and missing self-messages when reconnecting. This was most apparent in Palaver since it must reconnect so often.</p>
<h2 id="keeping-a-nick">Keeping a Nick</h2>
<p>In most cases the initial connection will authenticate you and your nick will be assigned to you. In rare cases, for instance when your connection is interrupted, you may not be able to claim your nick because the server still has it assigned to your old connection. Enabling the <code>keepnick</code> module will configure ZNC to keep trying to get your original nick. In the case of a disconnection, your nick will become available once the old connection times out on the server end. I've only experienced this when connecting over Tor, but it's likely good practice generally.</p>
<h3 id="cloaks">Cloaks</h3>
<p>To prevent users seeing your IP address, you can use a cloak.</p>
<p>On <a href="https://libera.chat/guides/cloaks">Libera</a>, just <code>/join #libera-cloak</code> and send <code>!cloakme</code>, now your <code>/whois {user}</code> response should look like:</p>
<pre><code>[14:22:25] cptaffe has userhost ~ZNC@user/cptaffe and real name &quot;Connor Taffe&quot;
</code></pre>
<p>On <a href="https://www.oftc.net/UserCloaks/">OFTC</a>, send <code>/msg NickServ SET CLOAK ON</code>.</p>
<p>On <a href="https://ergo.chat/about-network">Ergo</a>, users are automatically cloaked:</p>
<blockquote>
<p>By default, all hostnames on ergo.chat are cryptographically “cloaked” so that your IP address information is not visible to other users (although it is visible to server administrators).</p>
</blockquote>
<p>They note, you can connect even more anonymously:</p>
<blockquote>
<p>If you would like to anonymize your connection against the administrators as well, we are accessible via the Tor network, although you may be banned from some channels until you register a nickname:</p>
<table>
<thead>
<tr>
<th>Field</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Host</td>
<td>vrw7zcuarwx4oeju3iikiz3jffrvuijsysyznqf53mxizxrebomfnrid.onion</td>
</tr>
<tr>
<td>Port</td>
<td>6667</td>
</tr>
<tr>
<td>SSL/TLS</td>
<td>false</td>
</tr>
</tbody>
</table>
</blockquote>
<h2 id="on-system-6">On System 6</h2>
<p>To get Wallops to connect, I needed to create a new account with only one channel configured so that the joins wouldn't overwhelm the Macintosh SE. Wallops also doesn't have a username field where we could pass the network, so we can pass it in the password field as:</p>
<pre><code>{username}/{network}:{password}
</code></pre>
<p>With that, Wallops can connect to ZNC and by proxy use CertFP for authentication!</p>
<figure>
<img src="/resources/images/2023-09-05-znc/macintosh-wallops.jpg" alt="Wallops on a Macintosh SE" />
<figcaption>Wallops on a Macintosh SE</figcaption>
</figure>
<p>We can also use the <code>chanfilter</code> module to hide channels so that Wallops only sees one channel, enabling us to use a single account. See the section below on <a href="#multiple-clients">multiple clients</a>.</p>
<p>First, add a new client id to <code>chanfilter</code>, I called it <code>wallops</code>:</p>
<pre><code>/msg *chanfilter AddClient wallops
</code></pre>
<p>Then, join from an IRC client which can handle all your channels using the client identifier. You can place the identifier in your username: for Textual I use <code>shared@wallops/libera</code>, but for Palaver (which has a dedicated network field for ZNC) I use <code>shared@wallops</code>. Once connected, leave all channels you wish to hide from <code>wallops</code> (all but one) -- the <code>/part</code> will be intercepted by <code>chanfilter</code> (on a second <code>/part</code>, ZNC will leave the channel).</p>
<p>To check that the channels you want to hide are hidden:</p>
<pre><code>/msg *chanfilter ListChans wallops
</code></pre>
<p>Now in Wallops, our password field will look like:</p>
<pre><code>{username}@{client identifier}/{network}:{password}
</code></pre>
<p>for example, <code>user@wallops/libera:hunter2</code>.</p>
<h2 id="over-the-internet">Over the Internet</h2>
<p>To access our ZNC bouncer outside of the network, we need to create a NAT rule. I followed these <a href="https://docs.netgate.com/pfsense/en/latest/nat/port-forwards.html">instructions</a> to port-forward 6697 from my DNS address to my VM via pfSense, then followed these <a href="https://docs.netgate.com/pfsense/en/latest/recipes/port-forwards-from-local-networks.html">instructions</a> to enable NAT Reflection so I could reach it from inside my network as well as outside.</p>
<p>I quickly realized that my internal certs won't work with my external DNS name, so I decided to scrap the port forward in favor of using the HAProxy plug-in. I've already configured it to work with the ACME plug-in to automatically issue and renew certificates for my domain names with Let's Encrypt. To do this, we add a new backend in HAProxy, <code>znc</code>, with:</p>
<table>
<thead>
<tr>
<th>Forward to</th>
<th>Address</th>
<th>Port</th>
<th>Encrypt (SSL)</th>
<th>SSL Checks</th>
</tr>
</thead>
<tbody>
<tr>
<td>Address+Port</td>
<td>10.0.3.3</td>
<td>6667</td>
<td>No</td>
<td>No</td>
</tr>
</tbody>
</table>
<p>We also need to tune the timeout settings because IRC connections are long-lived and often quiet, unlike HTTP connections. The timeout must be longer than the interval between <code>PING</code>s. I set mine to one day:</p>
<table>
<thead>
<tr>
<th>Timeout</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Connection timeout</td>
<td>Blank</td>
</tr>
<tr>
<td>Server timeout</td>
<td>86400000</td>
</tr>
<tr>
<td>Retries</td>
<td>Blank</td>
</tr>
</tbody>
</table>
<p>From the HAProxy <a href="https://www.haproxy.com/blog/the-four-essential-sections-of-an-haproxy-configuration#timeout-connect-timeout-client-timeout-server">docs</a>:</p>
<blockquote>
<p>The <code>timeout connect</code> setting configures the time that HAProxy will wait for a TCP connection to a backend server to be established. The <code>timeout client</code> setting measures inactivity during periods that we would expect the client to be speaking, or in other words sending TCP segments. The <code>timeout server</code> setting measures inactivity when we’d expect the backend server to be speaking. When a timeout expires, the connection is closed.</p>
</blockquote>
<p>We forward to the unencrypted port to avoid the extra SSL overhead on our local network, but SSL can be used as well. Then create a new front-end <code>irc-6697</code> with:</p>
<table>
<thead>
<tr>
<th>Listen Address</th>
<th>Custom Address</th>
<th>Port</th>
<th>SSL Offloading</th>
</tr>
</thead>
<tbody>
<tr>
<td>WAN address (IPv4)</td>
<td></td>
<td>6697</td>
<td>Yes</td>
</tr>
</tbody>
</table>
<p>We also need to tune the timeout settings again:</p>
<table>
<thead>
<tr>
<th>Timeout</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Client timeout</td>
<td>86400000</td>
</tr>
</tbody>
</table>
<p>Then under Actions, choose Use Backend and the <code>znc</code> backend.</p>
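<p>Outside of pfSense, the equivalent hand-written <code>haproxy.cfg</code> sections would look roughly like this (the certificate path is a placeholder):</p>
<pre><code>frontend irc-6697
    mode tcp
    bind :6697 ssl crt /path/to/combined.pem
    timeout client 86400000
    default_backend znc

backend znc
    mode tcp
    timeout server 86400000
    server znc 10.0.3.3:6667
</code></pre>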
<p>Next, under Firewall &gt; Rules, create a new rule:</p>
<table>
<thead>
<tr>
<th>Field</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Action</td>
<td>Pass</td>
</tr>
<tr>
<td>Interface</td>
<td>WAN</td>
</tr>
<tr>
<td>Address Family</td>
<td>IPv4</td>
</tr>
<tr>
<td>Protocol</td>
<td>TCP</td>
</tr>
<tr>
<td>Source</td>
<td>Any</td>
</tr>
<tr>
<td>Destination</td>
<td>This firewall (self)</td>
</tr>
<tr>
<td>Destination Port Range</td>
<td>Choose <code>(other)</code>, then 6697 for both to and from, since we're using a single port.</td>
</tr>
<tr>
<td>Description</td>
<td>IRC traffic to HAProxy</td>
</tr>
</tbody>
</table>
<p>Now we can easily configure our clients to use our external DNS address.</p>
<p>With our public ZNC service, we can use IRC on the move. I installed <a href="https://palaverapp.com/">Palaver</a> on my iPhone and configured it against my IRC bouncer to do just that.</p>
<h2 id="multiple-clients">Multiple Clients</h2>
<p>Using multiple clients on a single ZNC account can introduce issues with the playback buffer not forwarding to all clients, so that the history is choppy in any one client. I asked on <code>#palaver</code> on <code>irc.ergo.chat</code> (which can be set up just like Libera, and supports CertFP), and <code>kylef</code> gave me this advice:</p>
<blockquote>
<p>I would recommend the <code>znc-playback</code> module to solve the history sync per device, depending on which other clients you use and if they support it though.
Make sure that the &quot;auto clear&quot; buffer features are not enabled in ZNC otherwise it will clear buffers on each connection.
Having separate users would work but its complicated and this would solve it.
As for push notifications, I would recommend installing the <code>clientaway</code> module, then configure all your clients to auto away you when you are not there.
That provides the best experience as then when you are using one client actively, your other devices are not receiving messages you've read.</p>
</blockquote>
<p>Under &quot;Your Settings&quot; for the user your clients log in as, uncheck &quot;Auto Clear Chan Buffer&quot; and &quot;Auto Clear Query Buffer.&quot; You may need to do this for each channel under each network as well, if there are any already configured.</p>
<p>You'll also want to enable <code>route_replies</code> for all networks, so that client request responses (such as <code>/who</code>, etc.) are routed to the client which sent the request. See the <a href="https://wiki.znc.in/Multiple_clients">Multiple Clients</a> wiki page.</p>
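<p>Like the other network-level modules, <code>route_replies</code> can be loaded via <code>*status</code> on each network:</p>
<pre><code>/msg *status LoadMod --type=network route_replies
</code></pre>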
<h3 id="playback">Playback</h3>
<p>To install the <a href="https://wiki.znc.in/Playback">playback</a> module, we follow the directions in <a href="https://wiki.znc.in/Compiling_modules">compiling modules</a>:</p>
<pre><code>; git clone https://github.com/jpnurmi/znc-playback.git
; cd znc-playback/
</code></pre>
<p>We need the <code>znc-buildmod</code> command, available in the development package:</p>
<pre><code class="language-sh">; sudo dnf install znc-devel
</code></pre>
<p>Now we can build the module:</p>
<pre><code class="language-sh">; znc-buildmod playback.cpp
</code></pre>
<p>Now that we have <code>playback.so</code>, we can place it in a <code>.znc/modules</code> directory:</p>
<pre><code class="language-sh">; sudo mkdir /var/lib/znc/.znc/modules
; sudo mv playback.so /var/lib/znc/.znc/modules
; sudo chown -R znc:znc /var/lib/znc/.znc/modules
; sudo chmod 700 /var/lib/znc/.znc/modules
; sudo chmod 700 /var/lib/znc/.znc/modules/playback.so
</code></pre>
<p>Now to <a href="https://wiki.znc.in/Modules#(Un)Loading_Modules">load the module</a> you can either message <code>*status</code> if you are an admin:</p>
<pre><code>/msg *status LoadMod --type=global playback
</code></pre>
<p>or edit <code>/var/lib/znc/.znc/configs/znc.conf</code> directly, and add:</p>
<pre><code>LoadModule = playback
</code></pre>
<p>After restarting ZNC, the module should appear.</p>
<h3 id="palaver">Palaver</h3>
<p>To install the <a href="https://wiki.znc.in/Palaver"><code>znc-palaver</code></a> module which will enable push notifications to Palaver, we can follow similar directions:</p>
<pre><code class="language-sh">; git clone https://github.com/cocodelabs/znc-palaver
; cd znc-palaver/
; znc-buildmod palaver.cpp
; sudo mv palaver.so /var/lib/znc/.znc/modules
; sudo chown znc:znc /var/lib/znc/.znc/modules/palaver.so
; sudo chmod 700 /var/lib/znc/.znc/modules/palaver.so
</code></pre>
<p>Now either message <code>*status</code> if you are an admin:</p>
<pre><code>/msg *status LoadMod --type=global palaver
</code></pre>
<p>or edit <code>/var/lib/znc/.znc/configs/znc.conf</code>, and add:</p>
<pre><code>LoadModule = palaver
</code></pre>
<p>Upon restarting <code>znc</code>, you should see a &quot;Connected!&quot; push notification come through Palaver; you can also run <code>/msg *palaver info</code> for connected device info.</p>
<h3 id="client-away">Client Away</h3>
<p>For the <a href="https://wiki.znc.in/Clientaway"><code>clientaway</code></a> module, instructions are very similar:</p>
<pre><code class="language-sh">; git clone https://github.com/kylef-archive/znc-contrib.git
; cd znc-contrib/
; znc-buildmod clientaway.cpp
; sudo mv clientaway.so /var/lib/znc/.znc/modules
; sudo chown znc:znc /var/lib/znc/.znc/modules/clientaway.so
; sudo chmod 700 /var/lib/znc/.znc/modules/clientaway.so
</code></pre>
<p>This module is configured per-user instead of globally, so when messaging <code>*status</code> (admin not required):</p>
<pre><code>/msg *status LoadMod --type=user clientaway
</code></pre>
<p>Or via the config file, the <code>LoadModule clientaway</code> statement is added under a user in the config (or toggled on in the Web UI after a restart):</p>
<pre><code>&lt;User your-user&gt;
    ...
    LoadModule clientaway
</code></pre>
<p>You may also need to enable this per-network, via the UI or <code>*status</code> on each network:</p>
<pre><code>/msg *status LoadMod --type=network clientaway
</code></pre>
<h3 id="chan-filter">Chan Filter</h3>
<p>This module is helpful if you don't want all channels to be visible on all clients, see the section on <a href="#on-system-6">System 6</a> for usage.</p>
<p>For the <a href="https://wiki.znc.in/Chanfilter"><code>chanfilter</code></a> module, instructions are very similar:</p>
<pre><code class="language-sh">; git clone https://github.com/jpnurmi/znc-chanfilter.git
; cd znc-chanfilter/
; znc-buildmod chanfilter.cpp
; sudo mv chanfilter.so /var/lib/znc/.znc/modules
; sudo chown znc:znc /var/lib/znc/.znc/modules/chanfilter.so
; sudo chmod 700 /var/lib/znc/.znc/modules/chanfilter.so
</code></pre>
<h3 id="xmpp">XMPP</h3>
<p>The <a href="https://github.com/kylef-archive/znc-xmpp"><code>znc-xmpp</code></a> module adds an XMPP (Jabber) interface to ZNC, for XMPP clients like iChat on mid-2000s versions of OS X. The module requires the <code>libxml2</code> library and headers; install them via:</p>
<pre><code class="language-sh">; sudo dnf install libxml2-devel
</code></pre>
<p>then proceed as usual:</p>
<pre><code class="language-sh">; git clone https://github.com/kylef-archive/znc-xmpp.git
; cd znc-xmpp
</code></pre>
<p>The C++ compiler (my version of <code>g++</code> and <code>clang++</code>) complains about the use of <code>vector&lt;...&gt;</code> without a <code>using namespace std;</code> statement. I ran <code>grep vector -r src/</code> to find all usages of <code>vector</code> and changed them to <code>std::vector</code>.</p>
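<p>The same edit can be scripted; a rough <code>sed</code> one-liner (review the diff before building, since it will also rewrite occurrences in comments and strings):</p>
<pre><code class="language-sh">grep -rlw vector src/ | xargs sed -i -E 's/(^|[^:[:alnum:]_])vector\b/\1std::vector/g'
</code></pre>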
<pre><code class="language-sh">; make
; sudo mv xmpp.so /var/lib/znc/.znc/modules
; sudo chown znc:znc /var/lib/znc/.znc/modules/xmpp.so
; sudo chmod 700 /var/lib/znc/.znc/modules/xmpp.so
</code></pre>
<p>When connected as an admin user (no network required), message <code>*status</code>:</p>
<pre><code>/msg *status LoadMod --type=global xmpp znc.home.arpa
</code></pre>
<p>where <code>znc.home.arpa</code> is the host on which I want it to listen for XMPP connections (the default is localhost).</p>
<p>Now we'll need to add a firewall rule:</p>
<pre><code class="language-sh">; sudo firewall-cmd --permanent --new-service xmpp
; sudo firewall-cmd --permanent --service xmpp --add-port 5222/tcp
; sudo firewall-cmd --permanent --add-service xmpp
; sudo firewall-cmd --reload
</code></pre>
<h2 id="tor">Tor</h2>
<p>You can access networks more anonymously via Tor; several networks have onion services:</p>
<table>
<thead>
<tr>
<th>Network</th>
<th>Onion Service</th>
</tr>
</thead>
<tbody>
<tr>
<td>Libera</td>
<td><code>libera75jm6of4wxpxt4aynol3xjmbtxgfyjpu34ss4d7r7q2v5zrpyd.onion</code> as <code>palladium.libera.chat</code></td>
</tr>
<tr>
<td>OFTC</td>
<td><code>oftcnet6xg6roj6d7id4y4cu6dchysacqj2ldgea73qzdagufflqxrid.onion</code> as <code>irc.oftc.net</code></td>
</tr>
<tr>
<td>Ergo</td>
<td><code>vrw7zcuarwx4oeju3iikiz3jffrvuijsysyznqf53mxizxrebomfnrid.onion</code> as <code>irc.ergo.chat</code></td>
</tr>
</tbody>
</table>
<p>Add <code>/etc/yum.repos.d/tor.repo</code> as:</p>
<pre><code>[tor]
name=Tor for Fedora $releasever - $basearch
baseurl=https://rpm.torproject.org/fedora/$releasever/$basearch
enabled=1
gpgcheck=1
gpgkey=https://rpm.torproject.org/fedora/public_gpg.key
cost=100
</code></pre>
<p>then run</p>
<pre><code class="language-sh">; sudo dnf install tor
</code></pre>
<p>Adding mappings allows us to connect using TLS without certificate pinning, since the host name will match the certificate. TLS is required for CertFP authentication, which is required over Tor by Libera. Edit <code>/etc/tor/torrc</code> to include:</p>
<pre><code>MapAddress palladium.libera.chat libera75jm6of4wxpxt4aynol3xjmbtxgfyjpu34ss4d7r7q2v5zrpyd.onion
MapAddress irc.oftc.net oftcnet6xg6roj6d7id4y4cu6dchysacqj2ldgea73qzdagufflqxrid.onion
MapAddress irc.ergo.chat vrw7zcuarwx4oeju3iikiz3jffrvuijsysyznqf53mxizxrebomfnrid.onion
</code></pre>
<p>then run</p>
<pre><code class="language-sh">; sudo systemctl enable --now tor.service
</code></pre>
<p>Since ZNC doesn't support SOCKS proxies natively, you'll need <code>proxychains</code>:</p>
<pre><code class="language-sh">; sudo dnf install proxychains-ng
</code></pre>
<p>which by default is configured to proxy to Tor over <code>localhost:9050</code> using SOCKS4.</p>
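<p>On my install, the tail of the stock <code>/etc/proxychains.conf</code> reflects this default:</p>
<pre><code>[ProxyList]
socks4  127.0.0.1 9050
</code></pre>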
<p>Then change the <code>ExecStart</code> line in <code>/usr/lib/systemd/system/znc.service</code> to:</p>
<pre><code>ExecStart=/usr/bin/proxychains /usr/bin/znc -f
</code></pre>
<p>and add</p>
<pre><code>Requires=tor.service
</code></pre>
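<p>Rather than editing the packaged unit under <code>/usr/lib</code> (which a package update will overwrite), the same overrides can live in a drop-in created with <code>sudo systemctl edit znc.service</code>, roughly:</p>
<pre><code>[Unit]
Requires=tor.service
After=tor.service

[Service]
ExecStart=
ExecStart=/usr/bin/proxychains /usr/bin/znc -f
</code></pre>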
<p>Then, you should update your networks in ZNC to point to the <code>MapAddress</code>'d addresses above:</p>
<table>
<thead>
<tr>
<th>Network</th>
<th>Address</th>
</tr>
</thead>
<tbody>
<tr>
<td>Libera</td>
<td><code>palladium.libera.chat</code></td>
</tr>
<tr>
<td>OFTC</td>
<td><code>irc.oftc.net</code></td>
</tr>
<tr>
<td>Ergo</td>
<td><code>irc.ergo.chat</code></td>
</tr>
</tbody>
</table>
<p>At first I had OFTC mapped to <code>graviton.oftc.net</code>, but they load balance between several servers. When it moved to <code>dacia.oftc.net</code>, TLS stopped working and my connection dropped. Since <code>irc.oftc.net</code> is a subject alternative name in all of their server certificates, it's valid on every server without certificate pinning.</p>
<p>This means that all connections will happen over Tor, so networks without an onion service will be reached through Tor exit nodes, which many networks block. The suggestion from <code>#znc</code> is to run two ZNC servers, one specifically for Tor connections. You may also be able to connect one to the other so that only one ZNC service need be exposed. I've been told implementing proxy support would require a big refactor of the ZNC networking code.</p>
<h2 id="fixing-restarts">Fixing restarts</h2>
<p>The unit contains the line <code>After=network.target</code>, which does not ensure the network is actually usable, and may result in the following error:</p>
<pre><code>Binding to port [6668] on host [127.0.0.1] using ipv4... [ Unable to bind: Invalid argument ]
</code></pre>
<p>To remedy this, replace <code>After=network.target</code> with <code>After=network-online.target</code> and <code>Wants=network-online.target</code>, see <a href="https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/">Running Services after the Network is up</a>.</p>
<p>If not using <code>proxychains</code>, you can add a <code>PidFile</code> to <code>/var/lib/znc/.znc/configs/znc.conf</code>,</p>
<pre><code>PidFile /var/lib/znc/.znc/znc.pid
</code></pre>
<p>Then remove the <code>-f</code> flag so that <code>znc</code> forks, and add <code>Type=forking</code> to the service file at <code>/usr/lib/systemd/system/znc.service</code>:</p>
<pre><code>[Unit]
Description=ZNC, an advanced IRC bouncer
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStart=/usr/bin/znc
User=znc
PIDFile=/var/lib/znc/.znc/znc.pid

[Install]
WantedBy=multi-user.target
</code></pre>
<p>which should enable <code>systemctl restart znc.service</code>.</p>
<h2 id="other-networks">Other Networks</h2>
<p>You can add networks via the web UI, or via <code>*status</code> with:</p>
<pre><code>/msg *status AddNetwork undernet
</code></pre>
<p>then connect to that network, and you can add servers:</p>
<pre><code>/msg *status AddServer irc.undernet.org 6667
</code></pre>
<p>you can also load modules at the network level:</p>
<pre><code>/msg *status LoadMod --type=network sasl
</code></pre>
<p>or</p>
<pre><code>/msg *status LoadMod --type=network nickserv
</code></pre>
<p>and configure the module via its user:</p>
<pre><code>/msg *sasl Mechanism PLAIN
</code></pre>
<p>or</p>
<pre><code>/msg *nickserv SetCommand IDENTIFY PRIVMSG NickServ :IDENTIFY {password}
</code></pre>
<h3 id="undernet">Undernet</h3>
<p>Undernet doesn't support TLS, CertFP, or even SASL. Once you've signed up for an account, you need to provide</p>
<pre><code>+x! &lt;username&gt; &lt;password&gt;
</code></pre>
<p>in the network's server password field; this is called <a href="https://www.undernet.org/docs/x-commands-english">Login on Connect</a>. The <code>+x!</code> will cloak your IP upon connection:</p>
<blockquote>
<p><code>+x!</code>: Only connect me when X is online and hide my IP address</p>
</blockquote>
<p>X is the Undernet Channel Services bot, which should always be online.</p>
<p>Undernet also doesn't support registering nicks, only usernames registered through the Undernet Channel Service, known as <a href="https://cservice.undernet.org/">CService</a>. The CService website supports 2FA via OTP codes, but if enabled it is also required for IRC login, which ZNC doesn't support, so I don't recommend enabling it.</p>
<p>Undernet blocks Tor exit node IPs, so it won't connect if using <code>proxychains</code>.</p>
<h3 id="ircnet">IRCnet</h3>
<p>IRCnet supports TLS but also doesn't support registering nicks, and doesn't support SASL or CertFP. If you sign up for a <a href="https://www.cloak.ircnet.io/">cloak</a>, you can then provide a server password to <code>ssl.cloak.ircnet.io</code>, but you must have a static IP address or CIDR to sign up for one.</p>
<p>IRCnet does provide an onion service at <code>IRCnet3mh2zfmpn3zcgwtrjnh37zcnyvjmsvoig577isjmy6m24auqqd.onion</code> on port 6667.</p>
<h3 id="irssi"><code>irssi</code></h3>
<p>Below is a snippet of my <code>~/.irssi/config</code> as an example of how to configure each network:</p>
<pre><code>servers = (
  {
    address = &quot;connor.zip&quot;;
    chatnet = &quot;libera-znc&quot;;
    port = &quot;6697&quot;;
    password = &quot;shared@work/libera:REDACTED&quot;;
    use_tls = &quot;yes&quot;;
    tls_verify = &quot;yes&quot;;
    autoconnect = &quot;yes&quot;;
  },
  {
    address = &quot;connor.zip&quot;;
    chatnet = &quot;slashnet-znc&quot;;
    port = &quot;6697&quot;;
    password = &quot;shared@work/slashnet:REDACTED&quot;;
    use_tls = &quot;yes&quot;;
    tls_verify = &quot;yes&quot;;
    autoconnect = &quot;yes&quot;;
  },
  {
    address = &quot;connor.zip&quot;;
    chatnet = &quot;oftc-znc&quot;;
    port = &quot;6697&quot;;
    password = &quot;shared@work/oftc:REDACTED&quot;;
    use_tls = &quot;yes&quot;;
    tls_verify = &quot;yes&quot;;
    autoconnect = &quot;yes&quot;;
  }
);

chatnets = {
  &quot;libera-znc&quot; = { type = &quot;IRC&quot;; };
  &quot;slashnet-znc&quot; = { type = &quot;IRC&quot;; };
  &quot;oftc-znc&quot; = { type = &quot;IRC&quot;; };
};
</code></pre>
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>This <a href="https://rob-blackbourn.medium.com/how-to-use-cfssl-to-create-self-signed-certificates-d55f76ba5781">blog post</a> is a useful starting place for setting up a CA and Intermediate CA.</p>
<p>To add the root CA cert to the macOS keychain, run:</p>
<pre><code class="language-sh">sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.pem
</code></pre>
&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2023-09-01-system-6-online</id>
    <title>System 6, Online</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2023-09-01-system-6-online" />
    <published>2023-09-01T00:00:00-05:00</published>
    <summary>Hooking a System 6 Macintosh SE up to the Internet</summary>
    
    <media:content url="https://connor.zip/resources/images/2023-09-01-system-6-online/macintosh-online.jpg" medium="image" width="800" height="800"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>After writing most of <a href="/posts/2023-08-04-localtalk-ethernet">Browsing like it's 1994</a>, I found another Macintosh SE running System 6.0.3.</p>
<p>System 6 ships with minimal AppleTalk support, and no built-in support for AppleShare, so it's more challenging to get online. There's useful info in this post on <a href="https://happymacs.wordpress.com/tag/mac-os-classic-networking/">Networking your System 6 Mac</a>.</p>
<p>My System 6 machine is in slightly better shape cosmetically than the other Macintosh SE, and it has a drive cover since it originally came with a hard drive. Both machines have faulty internal floppy drives, so using an external 800k drive from my Apple IIGS, I was able to copy some files from the share on the System 7 machine to disk. But, I couldn't experiment much because this drive also failed while using Disk Copy<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> to copy an image file (Apple 3.5 Drive A9M0106, with a Sony MFD-51W-03 drive inside).</p>
<p>I've learned there is a ROM revision that allows later Macintosh SEs to support 1.4M floppy drives, but I believe mine only support 800k drives, and that it <a href="https://retrotechnology.com/herbs_stuff/mac_800k_cables.txt">matters what cable you use</a> to connect what model of floppy drive when it’s used as an internal drive. I can only find Apple Network Software 1.4.5, which includes the updates to System 6 to support AppleShare, on a 1.4M floppy. Since compatible floppy drives are expensive, the best solution may be to find an external SC20 SCSI drive or a <a href="https://www.bigmessowires.com/floppy-emu/">FloppyEmu</a> or <a href="https://www.scsi2sd.com/index.php?title=SCSI2SD">SCSI2SD</a>.</p>
<p>Luckily, I was able to find a reasonably priced working Sony MP-F51W 800k floppy drive on eBay. I placed it in an external floppy drive enclosure and read a disk successfully. I also acquired some unformatted Sony MFD-2DD 800k floppy disks, since my only spare floppy had bad sectors, and used Disk Copy on the System 7 Macintosh to burn a copy of <a href="https://www.macintoshrepository.org/18036-appleshare-2-0-1">AppleShare 2.0.1 Workstation for the Plus, SE or Macintosh II</a>. To install AppleShare, I booted the System 6 Macintosh with the disk in the external drive. This boots to the copy of System 6.0.3 on the disk. I then chose the Workstation option on the install list, which installed successfully onto the local hard drive. I also discovered the option to boot into Multifinder instead of Finder when selecting the start-up drive.</p>
<p>Upon reboot, the System 6 Macintosh still doesn't recognize Netatalk's AppleShare server, but it <em>does</em> recognize the System 7 Macintosh's file server. Since the file server on the System 7 machine was installed by the previous owner, I'm not sure what revision it is.</p>
<p>Below is the flow to copy files through the system:</p>
<figure class="graphviz">
<svg width="287pt" height="298pt" viewBox="0.00 0.00 286.75 298.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 294)"><polygon fill="white" stroke="none" points="-4,4 -4,-294 282.75,-294 282.75,4 -4,4"/><g id="clust1" class="cluster"><title>cluster_netatalk</title><polygon fill="none" stroke="black" points="181.75,-186 181.75,-282 272.75,-282 272.75,-186 181.75,-186"/><text text-anchor="middle" x="227.25" y="-264.7" font-family="Times,serif" font-size="14.00">Netatalk 2.x</text></g><g id="clust2" class="cluster"><title>cluster_mac7</title><polygon fill="none" stroke="black" points="16.75,-82 16.75,-178 262.5,-178 262.5,-82 16.75,-82"/><text text-anchor="middle" x="139.62" y="-160.7" font-family="Times,serif" font-size="14.00">System 7 Macintosh SE</text>
</g>
<!-- mac -->
<g id="node1" class="node">
<title>mac</title>
<polygon fill="none" stroke="black" points="108,-248 13.5,-248 13.5,-194 108,-194 108,-248"/>
<text text-anchor="middle" x="60.75" y="-216.7" font-family="Times,serif" font-size="14.00">MacBook</text>
</g>
<!-- netatalk -->
<g id="node2" class="node">
<title>netatalk</title>
<polygon fill="none" stroke="black" points="262.75,-248 190.75,-248 190.75,-194 262.75,-194 262.75,-248"/>
<text text-anchor="middle" x="226.75" y="-216.7" font-family="Times,serif" font-size="14.00">Share</text>
</g>
<!-- mac&#45;&gt;netatalk -->
<g id="edge1" class="edge">
<title>mac&#45;&gt;netatalk</title>
<path fill="none" stroke="black" d="M108.31,-221C130.38,-221 156.7,-221 178.82,-221"/>
<polygon fill="black" stroke="black" points="178.78,-224.5 188.78,-221 178.78,-217.5 178.78,-224.5"/>
</g>
<!-- mac7_share -->
<g id="node3" class="node">
<title>mac7_share</title>
<polygon fill="none" stroke="black" points="96.75,-144 24.75,-144 24.75,-90 96.75,-90 96.75,-144"/>
<text text-anchor="middle" x="60.75" y="-112.7" font-family="Times,serif" font-size="14.00">Share</text>
</g>
<!-- mac7_share&#45;&gt;netatalk -->
<g id="edge4" class="edge">
<title>mac7_share&#45;&gt;netatalk</title>
<path fill="none" stroke="black" d="M96.93,-139.31C121.49,-154.89 154.48,-175.8 180.92,-192.57"/>
<polygon fill="black" stroke="black" points="178.75,-195.34 189.07,-197.74 182.49,-189.43 178.75,-195.34"/>
</g>
<!-- mac7_hd -->
<g id="node4" class="node">
<title>mac7_hd</title>
<polygon fill="none" stroke="black" points="254.5,-144 199,-144 199,-90 254.5,-90 254.5,-144"/>
<text text-anchor="middle" x="226.75" y="-112.7" font-family="Times,serif" font-size="14.00">HD</text>
</g>
<!-- mac7_share&#45;&gt;mac7_hd -->
<g id="edge2" class="edge">
<title>mac7_share&#45;&gt;mac7_hd</title>
<path fill="none" stroke="black" d="M96.93,-117C123.5,-117 159.92,-117 187.26,-117"/>
<polygon fill="black" stroke="black" points="187.22,-120.5 197.22,-117 187.22,-113.5 187.22,-120.5"/>
<text text-anchor="middle" x="155.62" y="-121.7" font-family="Times,serif" font-size="14.00">Copy</text>
</g>
<!-- mac6 -->
<g id="node5" class="node">
<title>mac6</title>
<polygon fill="none" stroke="black" points="121.5,-72 0,-72 0,0 121.5,0 121.5,-72"/>
<text text-anchor="middle" x="60.75" y="-40.7" font-family="Times,serif" font-size="14.00">System 6</text>
<text text-anchor="middle" x="60.75" y="-22.7" font-family="Times,serif" font-size="14.00">Macintosh SE</text>
</g>
<!-- mac6&#45;&gt;mac7_hd -->
<g id="edge3" class="edge">
<title>mac6&#45;&gt;mac7_hd</title>
<path fill="none" stroke="black" d="M121.89,-65.7C144.2,-76.72 168.85,-88.9 188.53,-98.62"/>
<polygon fill="black" stroke="black" points="186.75,-101.64 197.27,-102.93 189.85,-95.37 186.75,-101.64"/>
</g>
</g>
</svg>
</figure>
<p>Apple's <a href="https://developer.apple.com/library/archive/documentation/Networking/Conceptual/AFP/AFPVersionDifferences/AFPVersionDifferences.html">documentation</a> notes:</p>
<blockquote>
<p>AFP 2.0 is the version that was initially documented in Inside AppleTalk. The contents of Inside AppleTalk are now split between this document and <em>Apple Filing Protocol Reference</em>.</p>
<p>AFP 2.1 was a significant upgrade to accommodate System 7.0.</p>
</blockquote>
<p>However, Netatalk's <code>asip-status.pl</code> reports it is advertising AFP versions 1.1 through 3.3:</p>
<pre><code class="language-sh">$ asip-status.pl localhost
...
AFP versions: AFPVersion 1.1,AFPVersion 2.0,AFPVersion 2.1,AFP2.2,AFPX03,AFP3.1,AFP3.2,AFP3.3
</code></pre>
<h2 id="system-608">System 6.0.8</h2>
<p>As referenced in this <a href="https://68kmla.org/bb/index.php?threads/appleshare-and-system-6.8519/post-99417">forum post</a>, Apple provided a <a href="https://www.macintoshrepository.org/6877-apple-older-software-downloads-archive">downloads archive</a> which included a copy of Network Software Installer 1.4.4, reportedly the latest version compatible with System 6 available on an 800k floppy. A copy is available in the <code>Networking_and_Communications_Software.zip</code> archive on that page. The installer is a Disk Copy 4.2 formatted image, but the README notes it needs System 6.0.5 or later.</p>
<figure>
<img src="/resources/images/2023-09-01-system-6-online/floppy-drive.jpg" alt="Apple 3.5 Drive" />
<figcaption>Apple 3.5 Drive</figcaption>
</figure>
<p>I took this opportunity to install System 6.0.8 from the second link, <a href="https://www.macintoshrepository.org/1778-mac-system-os-6-x-6-0-6-0-1-6-0-2-6-0-3-6-0-4-6-0-5-6-0-6-6-0-7-6-0-8-6-0-8l-">an archive of several System 6 versions</a>, by carting my external floppy drive back and forth between the System 7 and System 6 machines to burn each of the four 800k disks. During the 6.0.8 installation, I noticed it installed AppleTalk components, and I had read that System 6.0.8 contained the System 7 printing subsystem. Unfortunately, the Chooser behaved just as it did before, only recognizing the System 7 share.</p>
<p>Next I popped in the Network Software Installer 1.4.4 floppy, which I had burned earlier before realizing it wouldn't install on System 6.0.3. On 6.0.8 the installation went smoothly, and after a reboot, the Chooser showed my Netatalk server, letting me log in and mount the shares.</p>
<p>With the shares mounted, it was simple to copy MacTCP 2.0.6 over the network and into the System Folder and reboot once more. I then opened the Control Panel, chose MacTCP, and filled in the IP information from the <code>tinyMacIPgw</code> VM I set up in <a href="/posts/2023-08-04-localtalk-ethernet">Browsing like it's 1994</a>, but replaced the DNS address with my router.</p>
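<p>For reference, the MacTCP values I entered look roughly like this (the addresses are examples; use your own LAN's):</p>
<pre><code>Obtain Address: Manually
IP Address:     192.168.1.60
Gateway:        192.168.1.61   (the tinyMacIPgw VM)
DNS:            192.168.1.1    (the router)
</code></pre>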
<figure>
<img src="/resources/images/2023-09-01-system-6-online/mactcp.jpg" alt="MacTCP Control Panel" />
<figcaption>MacTCP Control Panel</figcaption>
</figure>
<p>That was enough to let me start Wallops and join an IRC channel!</p>
<figure>
<img src="/resources/images/2023-09-01-system-6-online/macintosh-wallops.jpg" alt="Wallops on a Macintosh SE running System 6.0.8" />
<figcaption>Wallops on a Macintosh SE running System 6.0.8</figcaption>
</figure>
<h2 id="other-apps">Other Apps</h2>
<p>I was able to connect to a local Linux VM via Telnet, first to set up my local Telnet server:</p>
<pre><code class="language-sh">; sudo dnf install telnet telnet-server
; sudo systemctl enable --now telnet.socket
; sudo firewall-cmd --add-service=telnet --zone=home --permanent
; sudo firewall-cmd --reload
</code></pre>
<p>On my machine, the <code>home</code> zone is limited to my local network; there's more information in Red Hat's documentation on <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/sec-working_with_zones">Working with Zones</a>.</p>
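<p>If your zone assignments differ, you can assign a source network to <code>home</code> and inspect the result (the CIDR below is an example; substitute your LAN's):</p>
<pre><code class="language-sh">; sudo firewall-cmd --zone=home --add-source=192.168.1.0/24 --permanent
; sudo firewall-cmd --reload
; sudo firewall-cmd --zone=home --list-all
</code></pre>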
<p>With that running, I was able to run <a href="https://www.macintoshrepository.org/83-ncsa-telnet-2-x">NCSA Telnet 2.7b5</a> and connect to it:</p>
<figure>
<img src="/resources/images/2023-09-01-system-6-online/ncsa-telnet-fullscreen.jpg" alt="NCSA Telnet" />
<figcaption>NCSA Telnet</figcaption>
</figure>
<p>At smaller screen sizes, it does overflow the scrollbars sometimes:</p>
<figure>
<img src="/resources/images/2023-09-01-system-6-online/ncsa-telnet-overflow.jpg" alt="NCSA Telnet with text overflowing" />
<figcaption>NCSA Telnet with text overflowing</figcaption>
</figure>
<p>I tested MacWWW 1.0.3, which <a href="http://archive.retro.co.za/mirrors/68000/www.vintagemacworld.com/sys6net.html">purportedly</a> works on System 6, with no such luck. I see the loading page:</p>
<figure>
<img src="/resources/images/2023-09-01-system-6-online/macwww-start.jpg" alt="MacWWW 1.0.3 start screen" />
<figcaption>MacWWW 1.0.3 start screen</figcaption>
</figure>
<p>but after clicking, it crashes:</p>
<figure>
<img src="/resources/images/2023-09-01-system-6-online/macwww-crash.jpg" alt="MacWWW 1.0.3 crash" />
<figcaption>MacWWW 1.0.3 crash</figcaption>
</figure>
<p>I also tried <a href="https://www.macintoshrepository.org/368-bbedit-lite-3-x">BBEdit Lite 3.5.1</a>, which works great:</p>
<figure>
<img src="/resources/images/2023-09-01-system-6-online/bbedit.jpeg" alt="BBEdit Lite 3.5.1" />
<figcaption>BBEdit Lite 3.5.1</figcaption>
</figure>
<p><a href="http://www.barebones.com/">BBEdit</a> is made by Bare Bones Software, now on revision 14.6. I'd heard of it during my earliest years programming, but I didn't realize how long it had been around.</p>
<p>Next up I'll try to <a href="https://happymacs.wordpress.com/2023/07/30/networking-an-apple-iigs-with-localtalk/">network my Apple IIGS</a>.</p>
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>Disk Copy is Apple's utility for duplicating disks and working with disk images. The 4.2 version is the earliest that runs on System 6, and 5.5 is the latest. Both can run on System 7, but fail on Basilisk II.</p>
<p>Disk Copy 6.3 runs on System 7 and Basilisk II running System 7.5 without issue, but won't run on System 6. It can create compressed images, and has a more streamlined UI than 4.2 or 5.5. The images it creates can be double-clicked to mount them, which opens Disk Copy in the background. I haven't been able to create 4.2-compatible images using 6.3, and the images can't be opened by earlier versions.</p>
<p>My recommendation after using these is to stick to 4.2 on all systems where it will run; see the <a href="https://www.discferret.com/wiki/Apple_DiskCopy_4.2">DC42 format description</a>.</p>
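<p>Based on that format description, here's a rough shell sketch to sanity-check an image: the data-fork size is a big-endian 32-bit value at offset 64, and a magic word of <code>0x0100</code> sits at offset 82.</p>
<pre><code class="language-sh">dc42_info() {
  img=$1
  # data-fork size: four big-endian bytes at offset 64
  size=$(od -An -tu1 -j64 -N4 "$img" | awk '{ print $1*16777216 + $2*65536 + $3*256 + $4 }')
  # magic word at offset 82; should read 0100
  magic=$(od -An -tx1 -j82 -N2 "$img" | tr -d ' ')
  printf 'data size: %s bytes, magic: 0x%s\n' "$size" "$magic"
}
</code></pre>
<p>Run against an 800k image, it should report a data size of 819200 bytes.</p>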
<p>A neat snippet from the Wikipedia article on <a href="https://en.wikipedia.org/wiki/Disk_Copy">Disk Copy</a>:</p>
<blockquote>
<p>Disk Copy was also the name of an Apple utility distributed with some of the earliest versions of the classic Mac OS. In order to copy 400K floppy disks using as few disk swaps as possible on a machine with only 128K of RAM, the original Disk Copy used the screen buffer to store binary data from the disk being copied; as a result, the screen (other than a small area at the bottom displaying the GUI) filled with noise while copying was in progress. It was shipped with System 1.1 and System 2.0.</p>
</blockquote>
&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2023-08-21-macintosh-midi</id>
    <title>Macintosh MIDI</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2023-08-21-macintosh-midi" />
    <published>2023-08-21T00:00:00-05:00</published>
    <summary>Connecting a Macintosh SE to an Ensoniq ESQ-1</summary>
    
    <media:content url="https://connor.zip/resources/images/2023-08-21-macintosh-midi/macintosh-cubase-closeup.jpg" medium="image" width="800" height="800"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
<p>A while back, I found an <a href="https://www.vintagesynth.com/ensoniq/ens_esq1.php">Ensoniq ESQ-1</a><sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> at Goodwill. Like most synths, it has MIDI input/output on the back. Unlike many modern synths, it has no integrated speakers or headphone jack; instead it sports full-size tape input/output jacks and stereo audio output jacks. Once I got it home, I bought some RCA adapters and plugged it into my Yamaha AVR, which serves as a preamp for a Schiit Vidar driving a pair of Magnepan LRS and a Rythmik L12 subwoofer. It sounded incredible, and the presets on the cartridge it came with were a lot of fun. Unfortunately, I don't know how to play, so that was as far as I got.</p>
<p>Last week I was surfing eBay and came across the Apple MIDI Interface, which connects either the 8-pin DIN printer or modem port on an Apple IIGS or Macintosh to MIDI input/output. I found one for a reasonable price and ordered it.</p>
<figure>
<img src="/resources/images/2023-08-21-macintosh-midi/midi-interface.jpg" alt="Apple MIDI Interface" />
<figcaption>Apple MIDI Interface</figcaption>
</figure>
<p>Connecting the Apple MIDI Interface is simple enough: connect one end of the 8-pin DIN cable to the printer port and the other to the MIDI Interface. Then connect one MIDI cable to the input port on the MIDI Interface and its other end to the output port on your instrument, and vice versa for the other cable.</p>
<figure class="graphviz">
<svg width="612pt" height="264pt" viewBox="0.00 0.00 611.50 264.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 260)"><polygon fill="white" stroke="none" points="-4,4 -4,-260 607.5,-260 607.5,4 -4,4"/><g id="clust1" class="cluster"><title>cluster_macintosh</title><polygon fill="none" stroke="black" points="8,-8 8,-104 138.75,-104 138.75,-8 8,-8"/><text text-anchor="middle" x="73.38" y="-86.7" font-family="Times,serif" font-size="14.00">Macintosh SE</text></g><g id="clust2" class="cluster"><title>cluster_interface</title><polygon fill="none" stroke="black" points="257,-8 257,-248 383.25,-248 383.25,-8 257,-8"/><text text-anchor="middle" x="320.12" y="-230.7" font-family="Times,serif" font-size="14.00">MIDI Interface</text></g>
<g id="clust3" class="cluster">
<title>cluster_instrument</title>
<polygon fill="none" stroke="black" points="469.25,-80 469.25,-248 595.5,-248 595.5,-80 469.25,-80"/>
<text text-anchor="middle" x="532.38" y="-230.7" font-family="Times,serif" font-size="14.00">Ensoniq ESQ&#45;1</text>
</g>
<!-- modem -->
<g id="node1" class="node">
<title>modem</title>
<polygon fill="none" stroke="black" points="130.75,-70 16,-70 16,-16 130.75,-16 130.75,-70"/>
<text text-anchor="middle" x="73.38" y="-38.7" font-family="Times,serif" font-size="14.00">Modem Port</text>
</g>
<!-- port -->
<g id="node2" class="node">
<title>port</title>
<polygon fill="none" stroke="black" points="368.88,-70 271.38,-70 271.38,-16 368.88,-16 368.88,-70"/>
<text text-anchor="middle" x="320.12" y="-38.7" font-family="Times,serif" font-size="14.00">8&#45;pin DIN</text>
</g>
<!-- modem&#45;&gt;port -->
<g id="edge1" class="edge">
<title>modem&#45;&gt;port</title>
<path fill="none" stroke="black" d="M142.63,-43C179.45,-43 224.46,-43 259.8,-43"/>
<polygon fill="black" stroke="black" points="142.65,-39.5 132.65,-43 142.65,-46.5 142.65,-39.5"/>
<polygon fill="black" stroke="black" points="259.4,-46.5 269.4,-43 259.4,-39.5 259.4,-46.5"/>
<text text-anchor="middle" x="197.88" y="-47.7" font-family="Times,serif" font-size="14.00">8&#45;pin DIN cable</text>
</g>
<!-- int_in -->
<g id="node3" class="node">
<title>int_in</title>
<polygon fill="none" stroke="black" points="370.75,-142 269.5,-142 269.5,-88 370.75,-88 370.75,-142"/>
<text text-anchor="middle" x="320.12" y="-110.7" font-family="Times,serif" font-size="14.00">MIDI input</text>
</g>
<!-- int_out -->
<g id="node4" class="node">
<title>int_out</title>
<polygon fill="none" stroke="black" points="375.25,-214 265,-214 265,-160 375.25,-160 375.25,-214"/>
<text text-anchor="middle" x="320.12" y="-182.7" font-family="Times,serif" font-size="14.00">MIDI output</text>
</g>
<!-- esq_in -->
<g id="node5" class="node">
<title>esq_in</title>
<polygon fill="none" stroke="black" points="583,-214 481.75,-214 481.75,-160 583,-160 583,-214"/>
<text text-anchor="middle" x="532.38" y="-182.7" font-family="Times,serif" font-size="14.00">MIDI input</text>
</g>
<!-- int_out&#45;&gt;esq_in -->
<g id="edge3" class="edge">
<title>int_out&#45;&gt;esq_in</title>
<path fill="none" stroke="black" d="M375.54,-187C404.57,-187 440.37,-187 470.33,-187"/>
<polygon fill="black" stroke="black" points="469.99,-190.5 479.99,-187 469.99,-183.5 469.99,-190.5"/>
<text text-anchor="middle" x="426.25" y="-191.7" font-family="Times,serif" font-size="14.00">MIDI cable</text>
</g>
<!-- esq_out -->
<g id="node6" class="node">
<title>esq_out</title>
<polygon fill="none" stroke="black" points="587.5,-142 477.25,-142 477.25,-88 587.5,-88 587.5,-142"/>
<text text-anchor="middle" x="532.38" y="-110.7" font-family="Times,serif" font-size="14.00">MIDI output</text>
</g>
<!-- esq_out&#45;&gt;int_in -->
<g id="edge2" class="edge">
<title>esq_out&#45;&gt;int_in</title>
<path fill="none" stroke="black" d="M476.87,-115C447.93,-115 412.26,-115 382.37,-115"/>
<polygon fill="black" stroke="black" points="382.73,-111.5 372.73,-115 382.73,-118.5 382.73,-111.5"/>
<text text-anchor="middle" x="426.25" y="-119.7" font-family="Times,serif" font-size="14.00">MIDI cable</text>
</g>
</g>
</svg>
</figure>
<p>Or as illustrated on the back of the box:</p>
<figure>
<img src="/resources/images/2023-08-21-macintosh-midi/midi-interface-box.jpg" alt="Apple MIDI Interface Box" />
<figcaption>Apple MIDI Interface Box</figcaption>
</figure>
<p>Next we need some music software for interfacing with MIDI. Older versions of <a href="https://www.steinberg.net/cubase/">Cubase</a><sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> run on System 7, but certain versions (e.g. 2.5) can't be copied off the disk without triggering the copy protection, and the README for Cubase LITE 1.0.2 warns that later versions will be distributed on high-density floppies which can't be used from 800k systems like my Macintosh SE.</p>
<p>If you have Basilisk II and AppleShare set up as outlined in <a href="/posts/2023-08-04-localtalk-ethernet">this article</a>, follow these directions:</p>
<ul>
<li>Download this <a href="https://www.macintoshrepository.org/32694-cubase-lite-1-0-68k-">copy</a> of Cubase LITE 1.0.2.</li>
<li>Modern macOS will handle unpacking <code>.hqx</code> simply by opening the file. Under that is a <code>.sit</code> file which must be unpacked with StuffIt.</li>
<li>Copy the <code>.dsk.sit</code> file to the share folder of Basilisk II, start the emulator, then drag the file onto the StuffIt app to expand it. The expanded file is a <code>.dsk</code> (equivalent to <code>.image</code>).</li>
<li>Shut down the emulator. Open the Basilisk GUI and add a new disk, selecting the <code>.dsk</code> file from the share folder. Start the emulator.</li>
<li>Drag the disk onto the share folder; this will copy the disk's files into a folder.</li>
<li>If your emulator's share folder is not the same as the AppleShare share, copy the folder from macOS to the mounted share.</li>
<li>On the Macintosh SE, open the same AppleShare share.</li>
<li>Open the Cubase folder. Then open your boot disk, then the System Folder, then Fonts.</li>
<li>Copy the Cubase font file into the Fonts folder.</li>
<li>Copy the Cubase app onto the hard disk.</li>
<li>Open the Cubase app!</li>
</ul>
<figure>
<img src="/resources/images/2023-08-21-macintosh-midi/macintosh-cubase.jpg" alt="Macintosh SE running Cubase 1.0.2" />
<figcaption>Macintosh SE running Cubase 1.0.2</figcaption>
</figure>
<p>Cubase starts with MIDI settings that work with the modem port. You can change it to use the printer port if you like, but I'm using that for LocalTalk. To create a new recording, click the record button at the bottom. Cubase will start clicking, and you can play some notes on your instrument. Once you're finished, click stop. The track will display; click on it to see the notes it recorded, as in the image above.</p>
<p>From this screen, we can print our masterpiece. Copying the font file into the system Fonts folder is required for this to work, as outlined in the documentation. My system printer is already configured to my HP LaserJet 4100 which advertises itself over AppleTalk, and I can print with the default settings. It takes a couple of minutes, but eventually the printer will start humming and out will come a beautifully typeset sheet of music.</p>
<figure>
<img src="/resources/images/2023-08-21-macintosh-midi/cubase-print.jpg" alt="Printed Sheet Music from Cubase" />
<figcaption>Printed Sheet Music from Cubase</figcaption>
</figure>
<p>At this point, you're likely unimpressed with my key smashing. Luckily, there are actual musicians who have documented their use of Cubase 1.0 on the Macintosh:</p>
<ul>
<li>Look Mum No Computer's video: <a href="https://www.youtube.com/watch?v=7EF5RcIKqE8">Making Music on a Macintosh with Cubase 1.0</a></li>
<li>Be The Aeroplane's video: <a href="https://www.youtube.com/watch?v=XgSp3r3VB5s">Sequencing my studio from a Mac SE running Cubase</a></li>
</ul>
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p><a href="https://en.wikipedia.org/wiki/Ensoniq_ESQ-1">Wikipedia</a> says:</p>
<blockquote>
<p>Ensoniq ESQ-1 is a 61-key, velocity sensitive, eight-note polyphonic and multitimbral synthesizer released by Ensoniq in 1985. It was marketed as a &quot;digital wave synthesizer&quot; but was an early Music Workstation.</p>
</blockquote>
&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></li>
<li id="fn:2">
<p>A 1990 article in Sound On Sound, <a href="http://www.muzines.co.uk/articles/mac-attack/7301">Mac Attack!</a>, discusses Cubase's arrival on the Macintosh platform and illustrates some of its features.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2023-08-04-localtalk-ethernet</id>
    <title>Browsing like it's 1994</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2023-08-04-localtalk-ethernet" />
    <published>2023-08-13T00:00:00-05:00</published>
    <summary>Integrating a Macintosh SE, and an ImageWriter II dot matrix printer into a modern network: browsing the web, printing with AirPrint, and sharing files</summary>
    
    <media:content url="https://connor.zip/resources/images/2023-08-04-localtalk-ethernet/macintosh-online.jpg" medium="image" width="800" height="533"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
<p>Before the ubiquity of the Internet, before WiFi, even before Ethernet was affordable, there was the LocalTalk physical layer and cabling system and its companion suite of protocols called AppleTalk<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. It was a network ahead of its time in terms of plug-and-play<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>, though at 230.4 kbit/s not nearly as fast as 10 Mbit/s Ethernet.</p>
<p>A few weeks ago, I found a Macintosh SE on Facebook Marketplace. It turned out to be running System 7.1, and had Microsoft Word 5 installed. Years prior, I had recapped an Apple IIGS and brought it back to life, and attempted to network it using LocalTalk and an ImageWriter II with a LocalTalk Option card, but was unsuccessful. With the Macintosh, I was finally able to use my ImageWriter II over AppleTalk!</p>
<p>Off to a good start, I wanted to expand my LocalTalk network. I swapped the ImageWriter II for an AsanteTalk and discovered that my HP LaserJet 4100N from 2004 with a 635n EIO networking card spoke EtherTalk and advertised itself as a LaserWriter. I was able to print the same document on the LaserJet, using the built-in PostScript driver for the LaserWriter -- the result was beautiful crisp text. In fact, there's another EIO card in the LaserJet; it provides USB connectivity alongside a LocalTalk port, so it could become part of my LocalTalk network as well.</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/hp-jetdirects.jpg" alt="HP JetDirect Cards" />
<figcaption>HP JetDirect Cards</figcaption>
</figure>
<p>Next, I ordered some LocalTalk adapters from eBay to convert the 8-pin DIN ports to 3-pin locking LocalTalk ports that work with LocalTalk cabling. Each adapter has two ports, which allows chaining devices together. Unfortunately, LocalTalk cabling is expensive (and PhoneNet was used more often at the time), so my LocalTalk network is limited to the Macintosh and AsanteTalk for the moment. With the AsanteTalk<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>, we open up the possibility of interfacing with a wider Ethernet and IP network.</p>
<p>As the early 90s arrived and the Internet became more widely available, these older Macintosh computers were used to access it. MacTCP and the AsanteTalk helped to enable this, and that's what this post is about.</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/asantetalk.jpg" alt="AsanteTalk" />
<figcaption>AsanteTalk</figcaption>
</figure>
<p>This post is broken down into several sections:</p>
<ul>
<li>The next section, <a href="#printing-over-localtalk">Printing over LocalTalk</a>, covers the steps I used to configure my ImageWriter II and Macintosh SE to allow printing over LocalTalk.</li>
<li><a href="#netatalk-2x">Netatalk 2.x</a> covers integrating a Linux server with an Ethernet connection into an AppleTalk network.</li>
<li><a href="#printing">Printing</a> covers how to print to the ImageWriter II from a modern network, even from an iPhone via AirPrint.</li>
<li><a href="#adding-files">Adding Files</a> covers how to get files from the internet onto your Macintosh SE via AppleShare.</li>
<li><a href="#getting-online">Getting Online</a> covers using a period-correct browser, MacWeb 0.98, to browse the web.</li>
<li><a href="#system-6">System 6</a> is an addendum on in-progress work to connect a Macintosh SE running System 6 to the network.</li>
</ul>
<h2 id="printing-over-localtalk">Printing over LocalTalk</h2>
<p>Printing from a Macintosh SE to an ImageWriter is easy with the LocalTalk option card.</p>
<ul>
<li>
<p>Install the LocalTalk option card in the ImageWriter II:</p>
<ul>
<li>
<p>Lift off the lid, both clear plastic and tan pieces</p>
</li>
<li>
<p>Gently move the carriage to the extreme left</p>
</li>
<li>
<p>Remove the ribbon cartridge by lightly bending the two black tabs on either side and lifting; ensure the ribbon isn't stuck in the print head.</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/imagewriter-ii-lid-off.jpg" alt="ImageWriter II with lid and cartridge removed" />
<figcaption>ImageWriter II with lid and cartridge removed</figcaption>
</figure>
</li>
<li>
<p>Unscrew the two golden screws on round plastic wells on either side of the printer</p>
</li>
<li>
<p>Lift up and back at the top of the printer, being careful not to pull the cable that connects the top buttons</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/imagewriter-ii-top-off.jpg" alt="ImageWriter II with top removed" />
<figcaption>ImageWriter II with top removed</figcaption>
</figure>
</li>
<li>
<p>Place the option card atop the logic board.</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/imagewriter-ii-logic-board.jpg" alt="ImageWriter II Logic Board" />
<figcaption>ImageWriter II Logic Board</figcaption>
</figure>
</li>
<li>
<p>Press the plastic spacers into the holes in the logic board. Slide the ground cable onto the unused stub.</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/localtalk-option-card.jpg" alt="LocalTalk Option Card" />
<figcaption>LocalTalk Option Card</figcaption>
</figure>
</li>
<li>
<p>Ensure that <a href="https://www.nefec.org/upm/printers/mapiw2.htm">DIP switch</a> 4 on the second switch block is in the down position to enable the card</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/imagewriter-ii-switches.jpg" alt="ImageWriter II DIP Switches" />
<figcaption>ImageWriter II DIP Switches</figcaption>
</figure>
</li>
</ul>
</li>
<li>
<p>Connect the Macintosh SE's printer port to the ImageWriter II using an 8-pin DIN printer cable, LocalTalk adapters on each side joined by a 3-pin DIN LocalTalk cable, or a Farallon PhoneNet adapter and a phone line.</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/macintosh-printer-port.jpg" alt="Macintosh SE Printer Port" />
<figcaption>Macintosh SE Printer Port</figcaption>
</figure>
</li>
<li>
<p>In Chooser, select &quot;AppleTalk ImageWriter&quot; to set it as the default printer</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/macintosh-chooser.jpg" alt="Macintosh Chooser" />
<figcaption>Macintosh Chooser</figcaption>
</figure>
</li>
<li>
<p>Open a document in a word processing app like Word 5.1, and print a document.</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/macintosh-word.jpg" alt="Word 5.1" />
<figcaption>Word 5.1</figcaption>
</figure>
</li>
</ul>
<h2 id="netatalk-2x">Netatalk 2.x</h2>
<p><a href="https://github.com/Netatalk/netatalk">Netatalk</a> is the Linux implementation of several Apple protocols, including AppleShare. Before 3.x, it supported AppleTalk, the protocol suite Apple used before the switch to IP; importantly for us, this is the protocol used over the LocalTalk and EtherTalk physical layers. There are several forks of Netatalk 2.x maintained by the retrocomputing community:</p>
<blockquote>
<p>In the 5 years since the release of Netatalk 2.2.6, an impressive number of forks and projects with their own downstream patchset to keep Netatalk running have emerged. Here are a few of the major ones that I encountered:</p>
<ul>
<li><a href="https://github.com/RasppleII/a2server">A2SERVER</a></li>
<li><a href="https://www.macip.net/">MacIP</a></li>
<li><a href="https://sheumann.github.io/AFPBridge/">AFPBridge</a></li>
<li><a href="http://cvsweb.netbsd.org/bsdweb.cgi/pkgsrc/net/netatalk22/patches/?sortby=date#dirlist">NetBSD <code>netatalk22</code> package</a></li>
<li><a href="https://github.com/christopherkobayashi/netatalk-classic"><code>netatalk-classic</code></a> fork</li>
</ul>
</blockquote>
<p>Last year, Daniel Markstedt (handles <code>rdmark</code> or <code>slipperygrey</code>) released a new <a href="https://68kmla.org/bb/index.php?threads/yet-another-netatalk-2-2-fork.39889/">Netatalk 2.x fork</a> which can be compiled on modern Linux and includes systemd services. I'll be installing it on a Fedora Server VM running on ESXi.</p>
<h3 id="compile">Compile</h3>
<p>To get <a href="https://github.com/rdmark/netatalk-2.x"><code>netatalk-2.x</code></a> installed and serving AFP and AppleTalk, we need to compile it. First, we'll install some dependencies:</p>
<pre><code class="language-sh">; sudo dnf install openssl-devel libgcrypt-devel libdb-devel automake libtool avahi-devel cups-devel
</code></pre>
<table>
<thead>
<tr>
<th>Dependency</th>
<th>Feature</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>avahi-devel</code></td>
<td>Zeroconf (Bonjour) service discovery in Mac OS X 10.2 or later</td>
</tr>
<tr>
<td><code>cups-devel</code></td>
<td><code>papd</code> printer server support</td>
</tr>
<tr>
<td><code>libgcrypt-devel</code></td>
<td>DHX2 authentication support, required for Mac OS X 10.2 or later</td>
</tr>
</tbody>
</table>
<p>Then we'll need the <code>appletalk</code> kernel module for AppleTalk network support. On Fedora this is provided by <code>kernel-modules-extra</code>, but not on Fedora 35:</p>
<pre><code class="language-sh">; sudo dnf install kernel-modules-extra
</code></pre>
<p>On Fedora, the <code>appletalk</code> module is blacklisted. To allow it, edit the file <code>/etc/modprobe.d/appletalk-blacklist.conf</code> and comment out the last line:</p>
<pre><code># This kernel module can be automatically loaded by non-root users. To
# enhance system security, the module is blacklisted by default to ensure
# system administrators make the module available for use as needed.
# See https://access.redhat.com/articles/3760101 for more details.
#
# Remove the blacklist by adding a comment # at the start of the line.
#blacklist appletalk
</code></pre>
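<p>Rather than editing by hand, a single <code>sed</code> substitution makes the change. Below is a sketch demonstrating the edit on a temporary stand-in file; on the real system you would run the same <code>sed</code> line against <code>/etc/modprobe.d/appletalk-blacklist.conf</code> with <code>sudo</code>:</p>
<pre><code class="language-sh">conf=$(mktemp)
# stand-in for the stock file, which contains the active blacklist line
echo 'blacklist appletalk' | tee $conf
# prefix the line with a comment character to lift the blacklist
sed -i 's/^blacklist appletalk/#blacklist appletalk/' $conf
cat $conf
</code></pre>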
<p>Then, to have the module load automatically<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>, we need to add the file <code>/etc/modules-load.d/appletalk.conf</code>:</p>
<pre><code># Load appletalk.ko at boot
appletalk
</code></pre>
<p>which configures the <code>systemd-modules-load.service</code> service. You can test it with:</p>
<pre><code class="language-sh">; sudo systemctl start systemd-modules-load.service
; journalctl -n 10 -u systemd-modules-load.service
Aug 05 01:25:44 misc.home.arpa systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 05 01:25:44 misc.home.arpa systemd-modules-load[1433]: Inserted module 'appletalk'
...
</code></pre>
<p>Upon reboot, the module should be loaded automatically. To check that it is:</p>
<pre><code class="language-sh">; lsmod | grep '^appletalk'
</code></pre>
<p>and to manually load the module:</p>
<pre><code class="language-sh">; sudo modprobe appletalk
</code></pre>
<p>To compile <code>netatalk-2.x</code>, first clone the repo:</p>
<pre><code class="language-sh">; git clone https://github.com/rdmark/netatalk-2.x.git
</code></pre>
<p>Then run the bootstrap script<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup>.</p>
<pre><code class="language-sh">; ./bootstrap
</code></pre>
<p>Now run the configure script; the options are described in <a href="https://68kmla.org/bb/index.php?threads/yet-another-netatalk-2-2-fork.39889/">this post</a>.</p>
<pre><code class="language-sh">; ./configure --enable-systemd --enable-ddp --enable-a2boot --enable-cups --enable-timelord --enable-zeroconf --disable-quota --sysconfdir=/etc --with-uams-path=/usr/lib/netatalk
</code></pre>
<p>Finally, run <code>make</code> then <code>make install</code> as root.</p>
<pre><code class="language-sh">; make
; sudo make install
</code></pre>
<h3 id="configure">Configure</h3>
<p>Now you should have the systemd services in place for <code>atalkd.service</code> and <code>afpd.service</code> among others. First let's set up some minimal config files under <code>/etc/netatalk/</code>:</p>
<p>To configure <code>atalkd.conf</code>, you'll need the name of the interface that an AppleTalk network will be present on (a LAN):</p>
<pre><code class="language-sh">; ip addr
1: lo: ...
    ....
2: ens160: ...
    ....
...
</code></pre>
<p>In my case my VM's interface is named <code>ens160</code>, so my <code>/etc/netatalk/atalkd.conf</code> file ends with the line:</p>
<pre><code>ens160 -router -phase 2 -net 1 -addr 1.41 -zone &quot;office&quot;
</code></pre>
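<p>Annotated, the flags break down as follows (a sketch of my configuration; adjust the interface name, network range, and zone for your own network):</p>
<pre><code># ens160: the Ethernet interface AppleTalk runs on
# -router: act as a seed router, supplying the net and zone info below
# -phase 2: modern (Phase 2) AppleTalk
# -net 1: network range    -addr 1.41: net.node address    -zone: zone name
ens160 -router -phase 2 -net 1 -addr 1.41 -zone &quot;office&quot;
</code></pre>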
<p>Next is Apple Filing Protocol, which is configured in <code>/etc/netatalk/afpd.conf</code>:</p>
<pre><code>&quot;Office&quot; -transall -uamlist uams_guest.so,uams_clrtxt.so,uams_dhx2.so
</code></pre>
<p>I use <code>&quot;Office&quot;</code> instead of <code>-</code> because I want a friendly name instead of the VM hostname. <code>-transall</code> enables both DSI (Data Stream Interface<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>) over TCP and DDP (Datagram Delivery Protocol<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>), aka EtherTalk: the AppleTalk data link layer on the Ethernet physical layer. The UAMs listed enable the guest UAM for anonymous access, the clrtxt UAM for Classic Mac OS authentication, and the DHX2 UAM for Mac OS X / macOS authentication. The guest login only allows read-only access to shares, and System 7's AppleTalk interface in Chooser limits passwords to 8 characters. Netatalk authenticates against system users, so I created a new <code>macintosh</code> user with an 8-character password to allow logins.</p>
<pre><code class="language-sh">; sudo useradd macintosh
; sudo passwd macintosh
</code></pre>
<p>Next is the <code>AppleVolumes.default</code> file, which defines the volumes available to connecting systems. By default a user's home directory is exposed as a share with this line</p>
<pre><code>~
</code></pre>
<p>but we can also add other shares:</p>
<pre><code>/srv/appletalk &quot;Share&quot; options:prodos
</code></pre>
<p>This creates a share at <code>/srv/appletalk</code>, named <em>Share</em>, with the <code>prodos</code> option, which allows an Apple IIGS to use the share or boot from it.</p>
<p>You can now enable the services:</p>
<pre><code class="language-sh">; sudo systemctl enable --now atalkd.service
; sudo systemctl enable --now afpd.service
</code></pre>
<h3 id="firewall">Firewall</h3>
<p>To ensure we can access these shares over TCP using <code>afpovertcp</code> from a modern Mac, we need to open the firewall. I created a new service for the port and enabled it:</p>
<pre><code class="language-sh">; sudo firewall-cmd --permanent --new-service=afpovertcp
; sudo firewall-cmd --permanent --service=afpovertcp --add-port=548/tcp
; sudo firewall-cmd --permanent --add-service=afpovertcp
; sudo firewall-cmd --reload
</code></pre>
<h2 id="printing">Printing</h2>
<p>Netatalk also includes a Printer Access Protocol daemon called <code>papd</code> which integrates with CUPS and provides bidirectional printing support.</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/imagewriter-ii.jpg" alt="Apple ImageWriter II" />
<figcaption>Apple ImageWriter II</figcaption>
</figure>
<h3 id="macintosh-to-cups">Macintosh to CUPS</h3>
<p>Next we'll edit <code>/etc/netatalk/papd.conf</code> to expose our CUPS printers to the AppleTalk network; see these <a href="https://github.com/PiSCSI/piscsi/wiki/AFP-File-Sharing#sharing-a-modern-printer-over-appletalk">directions</a>:</p>
<pre><code>cupsautoadd:op=root:
</code></pre>
<p>The <a href="https://netatalk.sourceforge.io/2.0/htmldocs/papd.conf.5.html">documentation</a> tells us:</p>
<blockquote>
<p>If used as the first entry in papd.conf this will share all CUPS printers via papd. type/zone settings as well as other parameters assigned to this special printer share will apply to all CUPS printers. Unless the pd option is set, the CUPS PPDs will be used. To overwrite these global settings for individual printers simply add them subsequently to papd.conf and assign different settings.</p>
</blockquote>
<p>We should now enable the service</p>
<pre><code class="language-sh">; sudo systemctl enable --now papd.service
</code></pre>
<h3 id="cups-to-imagewriter-ii">CUPS to ImageWriter II</h3>
<p>To have CUPS print to an AppleTalk printer, we need a <code>pap</code> backend; see section 4 of these <a href="https://www.emaculation.com/doku.php/appletalk_printserver_macos_and_osx#sectionedit19">directions on configuring a <code>pap</code> backend</a><sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup>. By default, the backend only looks for <code>LaserWriter</code> devices, so edit <code>/usr/lib/cups/backend/pap</code> so that <code>devicetypes</code> also includes <code>ImageWriter</code>, or set it to <code>devicetypes=&quot;=&quot;</code> to find all devices.</p>
<pre><code>devicetypes=&quot;LaserWriter:ImageWriter&quot;
</code></pre>
<p>With the <code>pap</code> backend in place, we should see our printer here:</p>
<pre><code class="language-sh">; lpinfo -v
...
network pap://office/HP%20LaserJet%204100%20Series/LaserWriter
network pap://office/ImageWriter/ImageWriter
</code></pre>
<p>If it doesn't show up, ensure your printer is shared over AppleTalk:</p>
<pre><code class="language-sh">; nbplkup
...
    AsantéTalk 94B02967:Asant�Talk                         1.111:252
            ImageWriter:ImageWriter                        1.113:138
</code></pre>
<p>We also need to update our <code>/etc/cups/cupsd.conf</code> file with:</p>
<pre><code>BrowseOrder allow,deny
BrowseAllow all
BrowseRemoteProtocols CUPS dnssd pap
BrowseAddress @LOCAL
BrowseLocalProtocols CUPS dnssd pap
</code></pre>
<p>and restart the services:</p>
<pre><code class="language-sh">; sudo systemctl restart cups.service
</code></pre>
<p>Now in the CUPS UI, under Administration &gt; Add a Printer, you should see the AppleTalk Devices via pap option. Select it and continue, then copy the URL from <code>lpinfo -v</code> (or construct it from the <code>nbplkup</code> info) into the form, name the printer, and upload the <a href="https://www.openprinting.org/printer/Apple/Apple-ImageWriter_II">ImageWriter II PPD file</a>.</p>
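<p>If you construct the URL by hand from the <code>nbplkup</code> entries, it appears to take the shape <code>pap://zone/object/type</code>, with spaces in names percent-encoded. A small sketch, assuming my zone and printer names:</p>
<pre><code class="language-sh">zone='office'
entry='HP LaserJet 4100 Series:LaserWriter'   # an nbplkup object:type pair
object=${entry%%:*}
type=${entry##*:}
# percent-encode the spaces in the object name
uri=pap://$zone/$(echo $object | sed 's/ /%20/g')/$type
echo $uri
</code></pre>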
<p>We need to patch<sup id="fnref:9"><a href="#fn:9" class="footnote-ref" role="doc-noteref">9</a></sup> our Netatalk 2.x distribution so that the status check doesn't error on ImageWriter IIs. Apply this <a href="/resources/patches/netatalk-2.x/0001-Fix-PAP-status-for-ImageWriter-II.patch">patch</a> to your <code>netatalk-2.x</code> directory:</p>
<pre><code class="language-sh">; curl -s https://connor.zip/resources/patches/netatalk-2.x/0001-Fix-PAP-status-for-ImageWriter-II.patch | git apply -
; make
; sudo make install
; sudo systemctl restart papd.service
</code></pre>
<p>Now from the CUPS UI, you can print a test page and it should print from the ImageWriter II:</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/imagewriter-ii-cups-test-page.jpg" alt="ImageWriter II printing the CUPS test page" />
<figcaption>ImageWriter II printing the CUPS test page</figcaption>
</figure>
<p>By adding an Avahi service file under <code>/etc/avahi/services/</code>, we can even print via AirPrint:</p>
<pre><code class="language-xml">&lt;?xml version=&quot;1.0&quot; ?&gt;
&lt;!DOCTYPE service-group  SYSTEM 'avahi-service.dtd'&gt;
&lt;service-group&gt;
	&lt;name&gt;ImageWriter II&lt;/name&gt;
	&lt;service&gt;
		&lt;type&gt;_ipp._tcp&lt;/type&gt;
		&lt;subtype&gt;_universal._sub._ipp._tcp&lt;/subtype&gt;
		&lt;port&gt;631&lt;/port&gt;
		&lt;txt-record&gt;txtvers=1&lt;/txt-record&gt;
		&lt;txt-record&gt;qtotal=1&lt;/txt-record&gt;
		&lt;txt-record&gt;UUID=EF910D03-69A2-44BC-B793-2966D282B0A4&lt;/txt-record&gt;
		&lt;txt-record&gt;Binary=T&lt;/txt-record&gt;
		&lt;txt-record&gt;TBCP=T&lt;/txt-record&gt;
		&lt;txt-record&gt;kind=document&lt;/txt-record&gt;
		&lt;txt-record&gt;URF=none&lt;/txt-record&gt;
		&lt;txt-record&gt;rp=printers/imagewriter&lt;/txt-record&gt;
		&lt;txt-record&gt;note=Office&lt;/txt-record&gt;
		&lt;txt-record&gt;product=(ImageWriter II)&lt;/txt-record&gt;
		&lt;txt-record&gt;pdl=application/octet-stream,application/pdf,application/postscript,application/vnd.cups-raster,image/gif,image/jpeg,image/png,image/tiff,image/urf,text/html,text/plain,application/vnd.adobe-reader-postscript,application/vnd.cups-pdf&lt;/txt-record&gt;
	&lt;/service&gt;
&lt;/service-group&gt;
</code></pre>
<p>The final flow is:</p>
<figure class="graphviz">
<svg width="1366pt" height="120pt" viewBox="0.00 0.00 1366.00 120.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 116)"><polygon fill="white" stroke="none" points="-4,4 -4,-116 1362,-116 1362,4 -4,4"/><g id="clust1" class="cluster"><title>cluster_vm</title><polygon fill="none" stroke="black" points="188.5,-8 188.5,-104 759.5,-104 759.5,-8 188.5,-8"/><text text-anchor="middle" x="474" y="-86.7" font-family="Times,serif" font-size="14.00">VM</text></g><g id="clust2" class="cluster"><title>cluster_iw</title><polygon fill="none" stroke="black" points="1042,-8 1042,-104 1350,-104 1350,-8 1042,-8"/><text text-anchor="middle" x="1196" y="-86.7" font-family="Times,serif" font-size="14.00">ImageWriter II</text></g><!-- iphone -->
<g id="node1" class="node">
<title>iphone</title>
<polygon fill="none" stroke="black" points="80.25,-70 0,-70 0,-16 80.25,-16 80.25,-70"/>
<text text-anchor="middle" x="40.12" y="-38.7" font-family="Times,serif" font-size="14.00">iPhone</text>
</g>
<!-- cups -->
<g id="node2" class="node">
<title>cups</title>
<polygon fill="none" stroke="black" points="267,-70 196.5,-70 196.5,-16 267,-16 267,-70"/>
<text text-anchor="middle" x="231.75" y="-38.7" font-family="Times,serif" font-size="14.00">CUPS</text>
</g>
<!-- iphone&#45;&gt;cups -->
<g id="edge3" class="edge">
<title>iphone&#45;&gt;cups</title>
<path fill="none" stroke="black" d="M80.42,-43C110.85,-43 153.07,-43 185.02,-43"/>
<polygon fill="black" stroke="black" points="184.77,-46.5 194.77,-43 184.77,-39.5 184.77,-46.5"/>
<text text-anchor="middle" x="138.38" y="-47.7" font-family="Times,serif" font-size="14.00">PDF over IPP</text>
</g>
<!-- driver -->
<g id="node3" class="node">
<title>driver</title>
<polygon fill="none" stroke="black" points="524.25,-70 343.5,-70 343.5,-16 524.25,-16 524.25,-70"/>
<text text-anchor="middle" x="433.88" y="-38.7" font-family="Times,serif" font-size="14.00">GhostScript iwhi driver</text>
</g>
<!-- cups&#45;&gt;driver -->
<g id="edge1" class="edge">
<title>cups&#45;&gt;driver</title>
<path fill="none" stroke="black" d="M267.15,-43C285.21,-43 308.37,-43 331.62,-43"/>
<polygon fill="black" stroke="black" points="331.5,-46.5 341.5,-43 331.5,-39.5 331.5,-46.5"/>
<text text-anchor="middle" x="305.25" y="-47.7" font-family="Times,serif" font-size="14.00">Filters</text>
</g>
<!-- pap -->
<g id="node4" class="node">
<title>pap</title>
<polygon fill="none" stroke="black" points="751.5,-70 633,-70 633,-16 751.5,-16 751.5,-70"/>
<text text-anchor="middle" x="692.25" y="-38.7" font-family="Times,serif" font-size="14.00">PAP Backend</text>
</g>
<!-- driver&#45;&gt;pap -->
<g id="edge2" class="edge">
<title>driver&#45;&gt;pap</title>
<path fill="none" stroke="black" d="M524.54,-43C556.3,-43 591.53,-43 621.39,-43"/>
<polygon fill="black" stroke="black" points="621.04,-46.5 631.04,-43 621.04,-39.5 621.04,-46.5"/>
<text text-anchor="middle" x="578.62" y="-47.7" font-family="Times,serif" font-size="14.00">Raster data</text>
</g>
<!-- asante -->
<g id="node5" class="node">
<title>asante</title>
<polygon fill="none" stroke="black" points="954.75,-70 848.25,-70 848.25,-16 954.75,-16 954.75,-70"/>
<text text-anchor="middle" x="901.5" y="-38.7" font-family="Times,serif" font-size="14.00">AsanteTalk</text>
</g>
<!-- pap&#45;&gt;asante -->
<g id="edge4" class="edge">
<title>pap&#45;&gt;asante</title>
<path fill="none" stroke="black" d="M763.07,-43C786.85,-43 813.29,-43 836.59,-43"/>
<polygon fill="black" stroke="black" points="763.32,-39.5 753.32,-43 763.32,-46.5 763.32,-39.5"/>
<polygon fill="black" stroke="black" points="836.58,-46.5 846.58,-43 836.58,-39.5 836.58,-46.5"/>
<text text-anchor="middle" x="799.88" y="-47.7" font-family="Times,serif" font-size="14.00">EtherTalk</text>
</g>
<!-- card -->
<g id="node6" class="node">
<title>card</title>
<polygon fill="none" stroke="black" points="1225.5,-70 1050,-70 1050,-16 1225.5,-16 1225.5,-70"/>
<text text-anchor="middle" x="1137.75" y="-38.7" font-family="Times,serif" font-size="14.00">LocalTalk Option Card</text>
</g>
<!-- asante&#45;&gt;card -->
<g id="edge6" class="edge">
<title>asante&#45;&gt;card</title>
<path fill="none" stroke="black" d="M966.55,-43C988.73,-43 1014.14,-43 1038.43,-43"/>
<polygon fill="black" stroke="black" points="966.58,-39.5 956.58,-43 966.58,-46.5 966.58,-39.5"/>
<polygon fill="black" stroke="black" points="1038.36,-46.5 1048.36,-43 1038.36,-39.5 1038.36,-46.5"/>
<text text-anchor="middle" x="1002.38" y="-47.7" font-family="Times,serif" font-size="14.00">LocalTalk</text>
</g>
<!-- printer -->
<g id="node7" class="node">
<title>printer</title>
<polygon fill="none" stroke="black" points="1342,-70 1262.5,-70 1262.5,-16 1342,-16 1342,-70"/>
<text text-anchor="middle" x="1302.25" y="-38.7" font-family="Times,serif" font-size="14.00">Printer</text>
</g>
<!-- card&#45;&gt;printer -->
<g id="edge5" class="edge">
<title>card&#45;&gt;printer</title>
<path fill="none" stroke="black" d="M1225.71,-43C1234.33,-43 1242.86,-43 1250.92,-43"/>
<polygon fill="black" stroke="black" points="1250.82,-46.5 1260.82,-43 1250.82,-39.5 1250.82,-46.5"/>
</g>
</g>
</svg>
</figure>
<h2 id="adding-files">Adding files</h2>
<p>In each share, Netatalk creates metadata stores. Files in the share represent only <em>part</em> of a file; the metadata is maintained in these databases. If you add a Macintosh file in Linux, or even if you copy an existing file to a new name, it'll show up on the Macintosh as an unknown file -- the metadata about how to open it has been lost. This presents quite a problem: without a Macintosh that already has the file, how can I add files to my share? I've come up with a couple of options:</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/StuffIt">StuffIt</a> is a program for creating archive files. These files contain the special metadata, and it's recreated when StuffIt Expander is run on them on a Macintosh, so the files can be handled by other operating systems and file systems without losing it. It was a common way to provide Macintosh files over the internet at the time. If your Mac doesn't already have StuffIt installed but has a working floppy disk drive, you may be able to <code>dd</code> an <code>.img</code> image file onto a floppy using a USB floppy drive. Macs use variable-speed writes, so PCs can't read their floppies, but I believe the reverse is possible.</li>
<li><a href="https://basilisk.cebix.net/">Basilisk II</a> is a Macintosh emulator which can emulate a client Mac and allow you to add files to the share. Get the latest version (as of writing) of <a href="https://www.emaculation.com/forum/viewtopic.php?f=6&amp;t=7361">Basilisk II with support for ARM-based Macs</a> and the <a href="https://www.emaculation.com/forum/viewtopic.php?f=6&amp;t=10454">Basilisk II GUI</a>, then follow this <a href="https://www.emaculation.com/doku.php/basiliskii_osx_setup">guide</a> and <a href="https://www.emaculation.com/doku.php/appletalk_for_sheepshaver">this one</a> for AppleTalk. We can add an image file of a floppy containing StuffIt to install it, then copy that program onto the share.</li>
</ul>
<p>When using <code>slirp</code> with Basilisk II, the IP settings (configured in OpenTransport) are <em>not</em> those of your network, but of the network within Basilisk II:</p>
<table>
<thead>
<tr>
<th>Setting</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>IP Address</td>
<td>10.0.2.15</td>
</tr>
<tr>
<td>Subnet mask</td>
<td>255.255.255.0</td>
</tr>
<tr>
<td>Router address</td>
<td>10.0.2.2</td>
</tr>
<tr>
<td>Name server address</td>
<td>10.0.2.3</td>
</tr>
</tbody>
</table>
<p>I couldn't get <code>slirp</code> to work, but I discovered that the shared directory between macOS and System 7 on Basilisk II can hold the StuffIt files going into Basilisk II <em>and</em> the resulting decompressed files. And macOS can handle Macintosh files without disrupting the metadata. We can mount the Netatalk AFP server on our modern Mac via <code>afpovertcp</code>, then copy the un-stuffed program files from Basilisk II's share folder to our AFP folder. Or we can make the AFP share folder our Basilisk II share folder and skip the extra copy. I was able to copy the StuffIt program file this way from Basilisk II to my Macintosh SE.</p>
<figure class="graphviz">
<svg width="547pt" height="224pt" viewBox="0.00 0.00 546.75 224.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 220)"><polygon fill="white" stroke="none" points="-4,4 -4,-220 542.75,-220 542.75,4 -4,4"/><g id="clust1" class="cluster"><title>cluster_laptop</title><polygon fill="none" stroke="black" points="8,-112 8,-208 336.75,-208 336.75,-112 8,-112"/><text text-anchor="middle" x="172.38" y="-190.7" font-family="Times,serif" font-size="14.00">Laptop</text></g><g id="clust2" class="cluster"><title>cluster_mac</title><polygon fill="none" stroke="black" points="214.25,-8 214.25,-104 336.75,-104 336.75,-8 214.25,-8"/><text text-anchor="middle" x="275.5" y="-86.7" font-family="Times,serif" font-size="14.00">Macintosh SE</text></g>
<!-- basilisk -->
<g id="node1" class="node">
<title>basilisk</title>
<polygon fill="none" stroke="black" points="110.5,-174 16,-174 16,-120 110.5,-120 110.5,-174"/>
<text text-anchor="middle" x="63.25" y="-142.7" font-family="Times,serif" font-size="14.00">Basilisk II</text>
</g>
<!-- tcpshare -->
<g id="node2" class="node">
<title>tcpshare</title>
<polygon fill="none" stroke="black" points="328.75,-174 222.25,-174 222.25,-120 328.75,-120 328.75,-174"/>
<text text-anchor="middle" x="275.5" y="-142.7" font-family="Times,serif" font-size="14.00">AFP mount</text>
</g>
<!-- basilisk&#45;&gt;tcpshare -->
<g id="edge1" class="edge">
<title>basilisk&#45;&gt;tcpshare</title>
<path fill="none" stroke="black" d="M110.91,-147C140.35,-147 178.75,-147 210.95,-147"/>
<polygon fill="black" stroke="black" points="210.54,-150.5 220.54,-147 210.54,-143.5 210.54,-150.5"/>
<text text-anchor="middle" x="166.38" y="-151.7" font-family="Times,serif" font-size="14.00">share folder</text>
</g>
<!-- netatalk -->
<g id="node4" class="node">
<title>netatalk</title>
<polygon fill="none" stroke="black" points="538.75,-129 427.75,-129 427.75,-75 538.75,-75 538.75,-129"/>
<text text-anchor="middle" x="483.25" y="-97.7" font-family="Times,serif" font-size="14.00">Netatalk 2.x</text>
</g>
<!-- tcpshare&#45;&gt;netatalk -->
<g id="edge2" class="edge">
<title>tcpshare&#45;&gt;netatalk</title>
<path fill="none" stroke="black" d="M329.22,-135.47C355.81,-129.65 388.33,-122.54 416.53,-116.37"/>
<polygon fill="black" stroke="black" points="417.01,-119.85 426.04,-114.29 415.52,-113.01 417.01,-119.85"/>
<text text-anchor="middle" x="378.25" y="-135.86" font-family="Times,serif" font-size="14.00">TCP</text>
</g>
<!-- share -->
<g id="node3" class="node">
<title>share</title>
<polygon fill="none" stroke="black" points="328.75,-70 222.25,-70 222.25,-16 328.75,-16 328.75,-70"/>
<text text-anchor="middle" x="275.5" y="-38.7" font-family="Times,serif" font-size="14.00">AFP mount</text>
</g>
<!-- share&#45;&gt;netatalk -->
<g id="edge3" class="edge">
<title>share&#45;&gt;netatalk</title>
<path fill="none" stroke="black" d="M329.22,-58.12C355.81,-65.74 388.33,-75.07 416.53,-83.16"/>
<polygon fill="black" stroke="black" points="415.48,-86.49 426.06,-85.89 417.41,-79.77 415.48,-86.49"/>
<text text-anchor="middle" x="378.25" y="-84.22" font-family="Times,serif" font-size="14.00">AppleTalk</text>
</g>
</g>
</svg>
</figure>
<p>There are several file formats used for old Macintosh files:</p>
<ul>
<li><code>.sit</code> or <code>.sea</code> are archives created by StuffIt; StuffIt 4.x runs on a Macintosh but can't open archives from newer versions.</li>
<li><code>.bin</code> and <code>.hqx</code> which are extractable with Archive Utility on a modern macOS.</li>
</ul>
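<p>As an aside, BinHex files are easy to recognize even without an <code>.hqx</code> extension, because the format begins with a fixed banner line. A small self-contained demonstration (the sample file here is fabricated):</p>
<pre><code class="language-sh">echo '(This file must be converted with BinHex 4.0)' | tee sample.hqx
# a real .hqx download starts with exactly this banner
head -n 1 sample.hqx | grep -c 'converted with BinHex'
</code></pre>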
<p>Sometimes an <code>.img</code> file may be StuffIt compressed; to deal with that:</p>
<ul>
<li>Copy the file into the share folder for the emulator</li>
<li>On the emulator, StuffIt expand the file by dragging it onto the StuffIt Expander app</li>
<li>Shut down the emulator</li>
<li>In the Basilisk II GUI, add as a disk the expanded <code>.img</code> file in the share folder</li>
<li>Start the emulator; the files are available on the mounted disk image</li>
</ul>
<p>Here is an <a href="https://system7today.com/sys71-on-68k/">update list</a>, and here's <a href="https://erichelgeson.github.io/blog/2021/03/23/ultimate-system-7.1/">another</a> for System 7.1.</p>
<h2 id="getting-online">Getting Online</h2>
<h3 id="mac-ip-gateway">Mac IP Gateway</h3>
<p>To get our Macintosh online, we need either OpenTransport<sup id="fnref:10"><a href="#fn:10" class="footnote-ref" role="doc-noteref">10</a></sup> (for newer versions of System 7) or MacTCP. They both work by proxying IP packets over AppleTalk, where a gateway (originally a newer Mac running Apple IP Gateway<sup id="fnref:11"><a href="#fn:11" class="footnote-ref" role="doc-noteref">11</a></sup>) translates them to IP on Ethernet.</p>
<p>Using <a href="https://github.com/jasonking3/macipgw"><code>macipgw</code></a>, which was originally written for <a href="https://macipgw.sourceforge.io/">FreeBSD</a> but has now been ported to Linux, we can provide this gateway. The AppleTalk packets themselves are copied from LocalTalk to Ethernet by an AsanteTalk, and since EtherTalk is routed over Ethernet and not IP, any system or VM running this software or Netatalk 2.x must have a physical Ethernet connection. Unfortunately, <code>macipgw</code> requires a kernel with <code>CONFIG_IPDDP</code> disabled:</p>
<blockquote>
<p>Your kernel must be configured with the CONFIG_IPDDP option disabled completely. It is not sufficient to compile it as a module -- in order to support the module, the kernel is modified to intercept all MacIP traffic, so userspace applications such as macipgw cannot handle it.</p>
</blockquote>
<p>And my Fedora 37 kernel on the VM where I run Netatalk has it configured as a module:</p>
<pre><code class="language-sh">; cat /boot/config-$(uname -r) | grep CONFIG_IPDDP
CONFIG_IPDDP=m
CONFIG_IPDDP_ENCAP=y
</code></pre>
<p>There is a ready-made ISO image from <a href="https://www.macip.net/tinymacipgw-iso/">macip.net</a> based on Tiny Core Linux, which I can run on ESXi with 512MB of memory and no hard disk (it's a live CD).</p>
<h3 id="macweb">MacWeb</h3>
<p>Using MacWeb 0.98 and MacTCP configured with the IPs provided by <code>tinymacipgw</code>, I was able to access the local network and load this blog's index page from the Kubernetes cluster in my office closet. I attempted to use Netscape Navigator and iCab based on this <a href="https://system7today.com/otherbrowsers">list of System 7 browsers</a> to no avail: Netscape Navigator crashed, and iCab reported that it didn't have enough memory (the Macintosh has 4MB of RAM, the maximum configurable).</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/macintosh-online-wide.jpg" alt="MacWeb 0.98 on a Macintosh SE running System 7.1" />
<figcaption>MacWeb 0.98 on a Macintosh SE running System 7.1</figcaption>
</figure>
<p>Below is a diagram of the path from the Mac to the internet:</p>
<figure class="graphviz">
<svg width="997pt" height="120pt" viewBox="0.00 0.00 997.29 120.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 116)"><polygon fill="white" stroke="none" points="-4,4 -4,-116 993.29,-116 993.29,4 -4,4"/><g id="clust1" class="cluster"><title>cluster_mac</title><polygon fill="none" stroke="black" points="8,-8 8,-104 271.75,-104 271.75,-8 8,-8"/><text text-anchor="middle" x="139.88" y="-86.7" font-family="Times,serif" font-size="14.00">Macintosh SE</text></g><g id="clust2" class="cluster"><title>cluster_vms</title><polygon fill="none" stroke="black" points="554.25,-8 554.25,-104 837.25,-104 837.25,-8 554.25,-8"/><text text-anchor="middle" x="695.75" y="-86.7" font-family="Times,serif" font-size="14.00">ESXi</text></g><!-- macweb -->
<g id="node1" class="node">
<title>macweb</title>
<polygon fill="none" stroke="black" points="139,-70 16,-70 16,-16 139,-16 139,-70"/>
<text text-anchor="middle" x="77.5" y="-38.7" font-family="Times,serif" font-size="14.00">MacWeb 0.98</text>
</g>
<!-- mactcp -->
<g id="node2" class="node">
<title>mactcp</title>
<polygon fill="none" stroke="black" points="263.75,-70 176,-70 176,-16 263.75,-16 263.75,-70"/>
<text text-anchor="middle" x="219.88" y="-38.7" font-family="Times,serif" font-size="14.00">MacTCP</text>
</g>
<!-- macweb&#45;&gt;mactcp -->
<g id="edge1" class="edge">
<title>macweb&#45;&gt;mactcp</title>
<path fill="none" stroke="black" d="M139.25,-43C147.61,-43 156.16,-43 164.39,-43"/>
<polygon fill="black" stroke="black" points="164.27,-46.5 174.27,-43 164.27,-39.5 164.27,-46.5"/>
</g>
<!-- asante -->
<g id="node3" class="node">
<title>asante</title>
<polygon fill="none" stroke="black" points="465.5,-70 359,-70 359,-16 465.5,-16 465.5,-70"/>
<text text-anchor="middle" x="412.25" y="-38.7" font-family="Times,serif" font-size="14.00">AsanteTalk</text>
</g>
<!-- mactcp&#45;&gt;asante -->
<g id="edge2" class="edge">
<title>mactcp&#45;&gt;asante</title>
<path fill="none" stroke="black" d="M264.07,-43C288.65,-43 319.89,-43 347.26,-43"/>
<polygon fill="black" stroke="black" points="347.21,-46.5 357.21,-43 347.21,-39.5 347.21,-46.5"/>
<text text-anchor="middle" x="311.38" y="-47.7" font-family="Times,serif" font-size="14.00">LocalTalk</text>
</g>
<!-- macip -->
<g id="node4" class="node">
<title>macip</title>
<polygon fill="none" stroke="black" points="695,-70 562.25,-70 562.25,-16 695,-16 695,-70"/>
<text text-anchor="middle" x="628.62" y="-38.7" font-family="Times,serif" font-size="14.00">MacIP Gateway</text>
</g>
<!-- asante&#45;&gt;macip -->
<g id="edge4" class="edge">
<title>asante&#45;&gt;macip</title>
<path fill="none" stroke="black" d="M465.97,-43C491.46,-43 522.56,-43 550.65,-43"/>
<polygon fill="black" stroke="black" points="550.55,-46.5 560.55,-43 550.55,-39.5 550.55,-46.5"/>
<text text-anchor="middle" x="513.88" y="-47.7" font-family="Times,serif" font-size="14.00">EtherTalk</text>
</g>
<!-- router -->
<g id="node5" class="node">
<title>router</title>
<polygon fill="none" stroke="black" points="829.25,-70 743,-70 743,-16 829.25,-16 829.25,-70"/>
<text text-anchor="middle" x="786.12" y="-38.7" font-family="Times,serif" font-size="14.00">pfSense</text>
</g>
<!-- macip&#45;&gt;router -->
<g id="edge3" class="edge">
<title>macip&#45;&gt;router</title>
<path fill="none" stroke="black" d="M695.12,-43C707.2,-43 719.68,-43 731.36,-43"/>
<polygon fill="black" stroke="black" points="731.22,-46.5 741.22,-43 731.22,-39.5 731.22,-46.5"/>
<text text-anchor="middle" x="719" y="-47.7" font-family="Times,serif" font-size="14.00">IP</text>
</g>
<!-- internet -->
<g id="node6" class="node">
<title>internet</title>
<ellipse fill="none" stroke="black" cx="927.77" cy="-43" rx="61.52" ry="38.18"/>
<text text-anchor="middle" x="927.77" y="-38.7" font-family="Times,serif" font-size="14.00">Internet</text>
</g>
<!-- router&#45;&gt;internet -->
<g id="edge5" class="edge">
<title>router&#45;&gt;internet</title>
<path fill="none" stroke="black" d="M829.4,-43C837.38,-43 845.94,-43 854.54,-43"/>
<polygon fill="black" stroke="black" points="854.32,-46.5 864.32,-43 854.32,-39.5 854.32,-46.5"/>
</g>
</g>
</svg>
</figure>
<p>Here's what an HTTP request from MacWeb looks like:</p>
<pre><code class="language-sh">; nc -vv -l 0.0.0.0 -p 8000
Connection from 10.0.2.250:1698
GET / HTTP/1.0
Accept: application/mac-binhex40 q=0.500
Accept: audio/basic q=0.500
Accept: image/gif q=0.500
Accept: image/jpeg q=0.500
Accept: image/pict q=0.500
Accept: image/x-xbitmap q=0.500
Accept: video/mpeg q=0.500
Accept: video/quicktime q=0.500
Accept: www/source q=0.300
Accept: www/unknown q=0.300
Accept: application/octet-stream q=0.100
Accept: text/plain
Accept: text/html
User-Agent:  MacWeb/libwww/2.13  libwww/unknown
</code></pre>
<p>The <code>Accept</code> header syntax has some quirks when compared to the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept">standard</a>. Instead of separating the options with <code>,</code> on a single line, each option gets its own line; and instead of separating each option from its <code>q=</code> with a <code>;</code>, there is a space. The browser also lists the <em>most</em> preferred format last, rather than first, as would be expected on a single line, where order distinguishes between multiple values with no <code>q</code>. And it has no support for <code>application/xhtml+xml</code>, a <a href="https://www.iana.org/assignments/media-types/application/xhtml+xml">MIME type</a> registered in 2002, well after its release. After some adjustments, this website is now viewable in MacWeb 0.98, although every page except the index seems to hang (I assume because they're too large).</p>
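<p>Out of curiosity about how those lines map onto the modern syntax, here's a small sketch (the name <code>normalize_accept</code> is my own, and the parsing assumes exactly the format captured above) that folds MacWeb's one-option-per-line headers into a single standard <code>Accept</code> header:</p>

```python
# Hypothetical helper: fold MacWeb's one-option-per-line Accept headers
# (space before "q=", most-preferred value last) into one standard header.

def normalize_accept(lines):
    """Convert MacWeb-style Accept lines into a single standard header."""
    options = []
    for line in lines:
        value = line.split(":", 1)[1].strip()  # drop the "Accept:" prefix
        parts = value.split()                  # e.g. ["image/gif", "q=0.500"]
        mime = parts[0]
        q = float(parts[1][2:]) if len(parts) > 1 else 1.0  # no q means q=1
        options.append((mime, q))
    # Standard syntax: comma-separated, ";q=" attached, highest q first.
    options.sort(key=lambda option: option[1], reverse=True)
    return "Accept: " + ", ".join(
        mime if q == 1.0 else f"{mime};q={q:g}" for mime, q in options
    )

print(normalize_accept([
    "Accept: image/gif q=0.500",
    "Accept: text/html",
]))
```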
<p>There are also some interesting MIME types in the request, like <a href="https://datatracker.ietf.org/doc/html/rfc1741">BinHex</a> for applications or <a href="https://www.iana.org/assignments/media-types/audio/basic"><code>audio/basic</code></a> for audio (the digital audio encoding introduced by the telephone system):</p>
<blockquote>
<p>The content of the &quot;audio/basic&quot; subtype is single channel audio encoded using 8bit ISDN mu-law [PCM] at a sample rate of 8000 Hz.</p>
</blockquote>
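<p>For illustration, the companding curve behind that quote looks like this in its continuous form (real G.711 codecs use a segmented 8-bit approximation, so this is the underlying math rather than a bit-exact encoder):</p>

```python
import math

MU = 255  # the "mu" in mu-law, as used in North American/Japanese telephony

def mu_law_compress(x):
    """Map a linear sample in [-1, 1] onto the continuous mu-law curve.

    Small amplitudes are boosted and large ones compressed, which is why
    8 bits at 8000 Hz were enough for intelligible telephone speech.
    """
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
```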
<p>It also advertises <code>image/pict</code> for the <a href="https://www.prepressure.com/library/file-formats/pict">PICT</a> graphics format:</p>
<blockquote>
<p>PICT is a file format that was developed by Apple Computer in 1984 as the native format for Macintosh graphics. PICT files are encoded in QuickDraw commands. The PICT file format is a meta-format that can be used for both bitmap images and vector images.</p>
</blockquote>
<p>There's also <code>www/source</code> and <code>www/unknown</code>, artifacts of its use of <a href="https://www.w3.org/Library/"><code>libwww</code></a>, now available on <a href="https://github.com/w3c/libwww">GitHub</a>. I found some information in the MIT WWW Library <a href="http://web.mit.edu/wwwdev/src/WWW/Library/Implementation/HTFormat.html"><code>HTFormat</code> docs</a>. It seems to predate MIME:</p>
<blockquote>
<p>The <code>www/xxx</code> ones are of course not MIME standard.</p>
<p>star/star is an output format which leaves the input untouched. It is useful for diagnostics, and for users who want to see the original, whatever it is.</p>
<pre><code>#define WWW_SOURCE	HTAtom_for(&quot;*/*&quot;)      /* Whatever it was originally */
</code></pre>
<p><code>www/present</code> represents the user's perception of the document. If you convert to www/present, you present the material to the user.</p>
<pre><code>#define WWW_PRESENT	HTAtom_for(&quot;www/present&quot;)   /* The user's perception */
</code></pre>
<p>The <code>message/rfc822</code> format means a MIME message or a plain text message with no MIME header. This is what is returned by an HTTP server.</p>
<pre><code>#define WWW_MIME	HTAtom_for(&quot;www/mime&quot;)		   /* A MIME message */
</code></pre>
<p><code>www/print</code> is like <code>www/present</code> except it represents a printed copy.</p>
<pre><code>#define WWW_PRINT	HTAtom_for(&quot;www/print&quot;)		   /* A printed copy */
</code></pre>
<p><code>www/unknown</code> is a really unknown type. Some default action is appropriate.</p>
<pre><code>#define WWW_UNKNOWN     HTAtom_for(&quot;www/unknown&quot;)
</code></pre>
</blockquote>
<h2 id="ncsa-mosaic">NCSA Mosaic</h2>
<p>Based on a <a href="https://news.ycombinator.com/item?id=37547186">comment</a> on Hacker News, I also gave NCSA Mosaic 1.0.3 a try using this <a href="https://www.macintoshrepository.org/560-ncsa-mosaic-browser-">copy</a>. It works! Mosaic 1.x is the last version to work on the Macintosh SE, since Mosaic 2.0.1 asks for 5MB of memory.</p>
<figure>
<img src="/resources/images/2023-08-04-localtalk-ethernet/macintosh-mosaic.jpg" alt="NCSA Mosaic 1.0.3 on a Macintosh SE running System 7.1" />
<figcaption>NCSA Mosaic 1.0.3 on a Macintosh SE running System 7.1</figcaption>
</figure>
<p>The historical relevance of Mosaic can't be overstated. Marc Andreessen led the NCSA Mosaic project and went on to found Netscape, eventually co-founding the VC firm Andreessen Horowitz. Netscape's browser evolved into Mozilla Firefox, one of today's major browsers. In 1995, Internet Explorer had its start when Microsoft licensed Spyglass Mosaic (which shared no code with NCSA Mosaic but licensed the name).</p>
<figure class="graphviz">
<svg width="512pt" height="206pt" viewBox="0.00 0.00 512.00 206.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 202)"><polygon fill="white" stroke="none" points="-4,4 -4,-202 508,-202 508,4 -4,4"/><!-- ncsa --><g id="node1" class="node"><title>ncsa</title><polygon fill="none" stroke="black" points="120,-90 0,-90 0,-36 120,-36 120,-90"/><text text-anchor="middle" x="60" y="-58.7" font-family="Times,serif" font-size="14.00">NCSA Mosaic</text></g><!-- netscape --><g id="node2" class="node"><title>netscape</title><polygon fill="none" stroke="black" points="273.75,-126 177.75,-126 177.75,-72 273.75,-72 273.75,-126"/><text text-anchor="middle" x="225.75" y="-94.7" font-family="Times,serif" font-size="14.00">Netscape</text></g>
<!-- ncsa&#45;&gt;netscape -->
<g id="edge3" class="edge">
<title>ncsa&#45;&gt;netscape</title>
<path fill="none" stroke="black" d="M120.12,-76C135.08,-79.29 151.15,-82.82 166.09,-86.1"/>
<polygon fill="black" stroke="black" points="165.27,-89.51 175.78,-88.24 166.77,-82.67 165.27,-89.51"/>
</g>
<!-- spyglass -->
<g id="node5" class="node">
<title>spyglass</title>
<polygon fill="none" stroke="black" points="295.5,-54 156,-54 156,0 295.5,0 295.5,-54"/>
<text text-anchor="middle" x="225.75" y="-22.7" font-family="Times,serif" font-size="14.00">Spyglass Mosaic</text>
</g>
<!-- ncsa&#45;&gt;spyglass -->
<g id="edge4" class="edge">
<title>ncsa&#45;&gt;spyglass</title>
<path fill="none" stroke="black" d="M120.12,-50C128.05,-48.26 136.3,-46.45 144.52,-44.64"/>
<polygon fill="black" stroke="black" points="145.2,-48.07 154.21,-42.51 143.69,-41.24 145.2,-48.07"/>
</g>
<!-- firefox -->
<g id="node3" class="node">
<title>firefox</title>
<polygon fill="none" stroke="black" points="482.25,-198 353.25,-198 353.25,-144 482.25,-144 482.25,-198"/>
<text text-anchor="middle" x="417.75" y="-166.7" font-family="Times,serif" font-size="14.00">Mozilla Firefox</text>
</g>
<!-- netscape&#45;&gt;firefox -->
<g id="edge1" class="edge">
<title>netscape&#45;&gt;firefox</title>
<path fill="none" stroke="black" d="M274.2,-116.98C294.8,-124.79 319.4,-134.11 342.33,-142.8"/>
<polygon fill="black" stroke="black" points="341.06,-146.06 351.66,-146.33 343.54,-139.52 341.06,-146.06"/>
</g>
<!-- ah -->
<g id="node4" class="node">
<title>ah</title>
<polygon fill="none" stroke="black" points="504,-126 331.5,-126 331.5,-72 504,-72 504,-126"/>
<text text-anchor="middle" x="417.75" y="-94.7" font-family="Times,serif" font-size="14.00">Andreessen Horowitz</text>
</g>
<!-- netscape&#45;&gt;ah -->
<g id="edge2" class="edge">
<title>netscape&#45;&gt;ah</title>
<path fill="none" stroke="black" d="M274.2,-99C288.09,-99 303.8,-99 319.6,-99"/>
<polygon fill="black" stroke="black" points="319.57,-102.5 329.57,-99 319.57,-95.5 319.57,-102.5"/>
</g>
<!-- ie -->
<g id="node6" class="node">
<title>ie</title>
<polygon fill="none" stroke="black" points="489.38,-54 346.12,-54 346.12,0 489.38,0 489.38,-54"/>
<text text-anchor="middle" x="417.75" y="-22.7" font-family="Times,serif" font-size="14.00">Internet Explorer</text>
</g>
<!-- spyglass&#45;&gt;ie -->
<g id="edge5" class="edge">
<title>spyglass&#45;&gt;ie</title>
<path fill="none" stroke="black" d="M295.88,-27C308.44,-27 321.65,-27 334.54,-27"/>
<polygon fill="black" stroke="black" points="334.24,-30.5 344.24,-27 334.24,-23.5 334.24,-30.5"/>
</g>
</g>
</svg>
</figure>
<p>Mosaic's creation was enabled by Al Gore's bill, the <em>High Performance Computing and Communication Act of 1991</em>. Andreessen had this to say:</p>
<blockquote>
<p>If it had been left to private industry, it wouldn't have happened. At least, not until years later.</p>
</blockquote>
<p>Mosaic thus accounts for the origins, at least in name, of two major browsers, though only Firefox remains relevant today, now that IE has given way to Edge. As for the other two major browsers: Safari was originally derived from the KDE project's Konqueror browser engine, KHTML (and its JavaScript runtime, KJS); the WebCore component of Safari's browser engine, WebKit, was forked into Blink, Chrome's browser engine. Chrome's open-source project, Chromium, is the basis for several other browsers, including Brave, Vivaldi, and Opera.</p>
<figure class="graphviz">
<svg width="456pt" height="278pt" viewBox="0.00 0.00 455.75 278.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 274)"><polygon fill="white" stroke="none" points="-4,4 -4,-274 451.75,-274 451.75,4 -4,4"/><!-- khtml --><g id="node1" class="node"><title>khtml</title><polygon fill="none" stroke="black" points="102.75,-162 0,-162 0,-108 102.75,-108 102.75,-162"/><text text-anchor="middle" x="51.38" y="-130.7" font-family="Times,serif" font-size="14.00">Konqueror</text></g><!-- safari --><g id="node2" class="node"><title>safari</title><polygon fill="none" stroke="black" points="210.75,-162 138.75,-162 138.75,-108 210.75,-108 210.75,-162"/><text text-anchor="middle" x="174.75" y="-130.7" font-family="Times,serif" font-size="14.00">Safari</text>
</g>
<!-- khtml&#45;&gt;safari -->
<g id="edge1" class="edge">
<title>khtml&#45;&gt;safari</title>
<path fill="none" stroke="black" d="M103.15,-135C111,-135 119.08,-135 126.84,-135"/>
<polygon fill="black" stroke="black" points="126.79,-138.5 136.79,-135 126.79,-131.5 126.79,-138.5"/>
</g>
<!-- chrome -->
<g id="node3" class="node">
<title>chrome</title>
<polygon fill="none" stroke="black" points="333.75,-162 246.75,-162 246.75,-108 333.75,-108 333.75,-162"/>
<text text-anchor="middle" x="290.25" y="-130.7" font-family="Times,serif" font-size="14.00">Chrome</text>
</g>
<!-- safari&#45;&gt;chrome -->
<g id="edge2" class="edge">
<title>safari&#45;&gt;chrome</title>
<path fill="none" stroke="black" d="M211.04,-135C218.65,-135 226.86,-135 235,-135"/>
<polygon fill="black" stroke="black" points="234.88,-138.5 244.88,-135 234.88,-131.5 234.88,-138.5"/>
</g>
<!-- ie -->
<g id="node4" class="node">
<title>ie</title>
<polygon fill="none" stroke="black" points="442.88,-270 374.62,-270 374.62,-216 442.88,-216 442.88,-270"/>
<text text-anchor="middle" x="408.75" y="-238.7" font-family="Times,serif" font-size="14.00">Edge</text>
</g>
<!-- chrome&#45;&gt;ie -->
<g id="edge3" class="edge">
<title>chrome&#45;&gt;ie</title>
<path fill="none" stroke="black" d="M320.65,-162.25C335.69,-176.19 354.16,-193.32 370.22,-208.2"/>
<polygon fill="black" stroke="black" points="367.54,-210.5 377.25,-214.73 372.3,-205.36 367.54,-210.5"/>
</g>
<!-- brave -->
<g id="node5" class="node">
<title>brave</title>
<polygon fill="none" stroke="black" points="444.75,-198 372.75,-198 372.75,-144 444.75,-144 444.75,-198"/>
<text text-anchor="middle" x="408.75" y="-166.7" font-family="Times,serif" font-size="14.00">Brave</text>
</g>
<!-- chrome&#45;&gt;brave -->
<g id="edge4" class="edge">
<title>chrome&#45;&gt;brave</title>
<path fill="none" stroke="black" d="M333.97,-148.2C342.93,-150.97 352.41,-153.9 361.49,-156.7"/>
<polygon fill="black" stroke="black" points="360.27,-159.99 370.85,-159.6 362.33,-153.3 360.27,-159.99"/>
</g>
<!-- vivaldi -->
<g id="node6" class="node">
<title>vivaldi</title>
<polygon fill="none" stroke="black" points="447.75,-126 369.75,-126 369.75,-72 447.75,-72 447.75,-126"/>
<text text-anchor="middle" x="408.75" y="-94.7" font-family="Times,serif" font-size="14.00">Vivaldi</text>
</g>
<!-- chrome&#45;&gt;vivaldi -->
<g id="edge5" class="edge">
<title>chrome&#45;&gt;vivaldi</title>
<path fill="none" stroke="black" d="M333.97,-121.8C341.92,-119.34 350.28,-116.76 358.41,-114.25"/>
<polygon fill="black" stroke="black" points="359.31,-117.63 367.83,-111.34 357.24,-110.94 359.31,-117.63"/>
</g>
<!-- opera -->
<g id="node7" class="node">
<title>opera</title>
<polygon fill="none" stroke="black" points="446.25,-54 371.25,-54 371.25,0 446.25,0 446.25,-54"/>
<text text-anchor="middle" x="408.75" y="-22.7" font-family="Times,serif" font-size="14.00">Opera</text>
</g>
<!-- chrome&#45;&gt;opera -->
<g id="edge6" class="edge">
<title>chrome&#45;&gt;opera</title>
<path fill="none" stroke="black" d="M320.65,-107.75C335.69,-93.81 354.16,-76.68 370.22,-61.8"/>
<polygon fill="black" stroke="black" points="372.3,-64.64 377.25,-55.27 367.54,-59.5 372.3,-64.64"/>
</g>
</g>
</svg>
</figure>
<p>So, we can trace the lineage of every major browser today back to an early browser written in the 1990s.</p>
<p>And here's what a request looks like from NCSA Mosaic:</p>
<pre><code class="language-sh">; nc -vv -l 0.0.0.0 -p 8000
Connection from 10.0.2.250:1202
GET / HTTP/1.0
Accept: text/plain
Accept: application/x-html
Accept: application/html
Accept: text/x-html
Accept: text/html
Accept: text/richtext
Accept: application/octet-stream
Accept: application/postscript
Accept: application/mac-binhex40
Accept: application/zip
Accept: application/macwriteii
Accept: application/msword
Accept: image/gif
Accept: image/jpeg
Accept: image/x-pict
Accept: image/tiff
Accept: image/x-xbm
Accept: audio/x-aiff
Accept: audio/basic
Accept: video/mpeg
Accept: video/quicktime
Accept: application/macbinary
Accept: */*
User-Agent:  MacMosaicB6  libwww2.09
</code></pre>
<p>There is an option to use HTTP 0.9, which simply sends:</p>
<pre><code class="language-sh">; nc -vv -l 0.0.0.0 -p 8000
Connection from 10.0.2.250:1153
GET /
</code></pre>
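<p>HTTP/0.9 has no status line, no headers, and no content type on either side: the server writes the body and closes the connection to mark the end. Here's a minimal sketch of the whole exchange, using loopback sockets and a made-up response body purely for illustration:</p>

```python
import socket
import threading

def serve_one(server_sock):
    """Answer a single HTTP/0.9 request: body only, EOF ends the response."""
    conn, _ = server_sock.accept()
    request = conn.recv(1024).decode("ascii")  # e.g. "GET /\r\n"
    assert request.startswith("GET ")
    conn.sendall(b"hello from HTTP/0.9\n")     # no status line, no headers
    conn.close()                               # closing marks end-of-body

server = socket.socket()
server.bind(("127.0.0.1", 0))                  # pick any free port
server.listen(1)
threading.Thread(target=serve_one, args=(server,)).start()

client = socket.socket()
client.connect(("127.0.0.1", server.getsockname()[1]))
client.sendall(b"GET /\r\n")
chunks = []
while True:
    data = client.recv(4096)
    if not data:                               # server closed: response done
        break
    chunks.append(data)
response = b"".join(chunks)
print(response.decode("ascii"))
```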
<p>There are some interesting non-standard MIME types here as well, such as <code>application/x-html</code> and <code>text/x-html</code>. There's also <code>application/macbinary</code> for the MacBinary (<code>.bin</code>) format (similar to the BinHex format referenced above with <code>application/mac-binhex40</code>), used to transfer both the data and resource forks of Macintosh files across the network. We also see formats specific to Microsoft Word and MacWrite II.</p>
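<p>The MacBinary trick is a fixed 128-byte header carrying the Finder metadata, followed by the two forks padded to 128-byte boundaries. As a sketch (field offsets per the MacBinary I layout as I understand it; treat it as illustrative, not a full decoder):</p>

```python
import struct

def parse_macbinary_header(header):
    """Pull the interesting fields out of a 128-byte MacBinary I header."""
    assert len(header) == 128
    name_len = header[1]                            # Pascal-style length byte
    return {
        "name": header[2 : 2 + name_len].decode("mac-roman"),
        "type": header[65:69].decode("ascii"),      # Finder type, e.g. "APPL"
        "creator": header[69:73].decode("ascii"),   # Finder creator code
        "data_fork_len": struct.unpack(">I", header[83:87])[0],
        "rsrc_fork_len": struct.unpack(">I", header[87:91])[0],
    }
```

<p>The data fork starts at byte 128, padded out to a multiple of 128 bytes, with the resource fork following, so a receiver can rebuild both forks and the Finder info from a single flat byte stream.</p>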
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>The best resource for AppleTalk is <a href="/resources/pdfs/inside-appletalk.pdf">Inside AppleTalk, second edition</a>. Cisco also has detailed instructions on <a href="https://www.cisco.com/en/US/docs/ios/11_0/access/configuration/guide/acat.html">Configuring AppleTalk routing</a>.</p>
<p><img src="/resources/images/2023-08-04-localtalk-ethernet/appletalk-network.png" alt="AppleTalk Network" />&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>As Apple shifted to IP, it helped to create Bonjour, aka <a href="http://www.multicastdns.org/">Multicast DNS (mDNS)</a> or DNS Service Discovery (DNS-SD), which allows computers to advertise the services they provide on the network. The <a href="http://www.zeroconf.org/">ZeroConf</a> working group standardized IPv4LL (IPv4 Link-Local addressing), a requirement for fully plug-and-play networking without a DHCP server:</p>
<blockquote>
<p>The IETF Zeroconf Working Group was chartered September 1999 and held its first official meeting at the 46th IETF in Washington, D.C., in November 1999. By the time the Working Group completed its work on Dynamic Configuration of IPv4 Link-Local Addresses and wrapped up in July 2003, IPv4LL was implemented and shipping in Mac OS (9 &amp; X), Microsoft Windows (98, ME, 2000, XP, 2003), in every network printer from every major printer vendor, and in many assorted network devices from a variety of vendors. IPv4LL is available for Linux and for embedded operating systems. If you’re making a networked device today, there’s no excuse not to include IPv4 Link-Local Addressing.</p>
<p>The specification for IPv4 Link-Local Addressing is complete, but the work to improve network ease-of-use (Zero Configuration Networking) continues. That means making it possible to take two laptop computers, and connect them with a crossover Ethernet cable, and have them communicate usefully using IP, without needing a man in a white lab coat to set it all up for you. Zeroconf is not limited to networks with just two hosts, but as we scale up our technologies to larger networks, we always have to be sure we haven’t forgotten the two-devices (and no DHCP server) case.</p>
<p>Historically, AppleTalk handled this very well. Back in the 1980s if you took a group of Macs and connected them together with LocalTalk cabling, you had a working AppleTalk network, without any expert intervention, without needing to set up special servers like a DHCP server or a DNS server. In the 1990s the same was true using Ethernet — if you took a group of Macs and plugged them into an Ethernet hub, you had a working AppleTalk network, using AppleTalk-over-Ethernet. Now that it’s common for computers to have IEEE 802.11 (&quot;AirPort&quot;) networking built-in, you don’t even need cables or a hub.</p>
</blockquote>
&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></li>
<li id="fn:3">
<p>The AsanteTalk is a small metal box with a wall-wart power supply, an Ethernet port, and an 8-pin DIN port. It can be connected directly to the printer port of a Macintosh or Apple IIGS, to a printer, or to a LocalTalk 3-pin DIN adapter for use with locking LocalTalk cabling as part of a wider LocalTalk network. Farallon PhoneNet adapters were also popular in this era, and can be used as well along with standard 4-wire RJ11-terminated phone lines. The <em>Macintosh Troubleshooting Pocket Guide</em> from 2002 answers this question:</p>
<blockquote>
<p>How do I connect my LocalTalk printer to my USB Mac? Printers like the LaserWriter IINT, NTX, F, Personal LaserWriter NT, NTR, 320, LaserWriter Pro 600, 4/600 PS, Select 360, Color StyleWriter 6500, or an HP LaserJet with &quot;M&quot; or &quot;MP&quot; in its name?</p>
<p>To connect these printers to a new Mac, you must use an Ethernet to LocalTalk Bridge:</p>
<ol>
<li>The AsanteTalk Ethernet to LocalTalk Bridge includes everything you need to connect a LocalTalk printer to a new Mac. It works with existing drivers.</li>
<li>If the printer is already connected to a LocalTalk network, you can use Farallon's iPrint LT. The iPrint LT is similar to the AsanteTalk, except that it has a PhoneNet jack instead of a LocalTalk DIN-8 jack. If your existing LocalTalk network has more than eight LocalTalk devices on it, you need a much more expensive bridge, and are better off upgrading to Ethernet all around.</li>
</ol>
<p>One gotcha: Mac OS 10.2 and later no longer support PostScript Level 1 printers -- only PostScript 2 &amp; 3. So your old LaserWriter II NTX and other PostScript Level 1 printers will <em>not</em> work from 10.2 at all.</p>
</blockquote>
<p>More information on the AsanteTalk is available in the <a href="/resources/pdfs/asantetalk-manual.pdf">User Manual</a>. I found this manual on <a href="http://www.marushin-web.com/">Marushin</a>, a website for a Japanese shop which focuses on old Macintosh computers.</p>
<blockquote>
<p>I can't help but fix it. This is the spirit of the 65-year-old shopkeeper.</p>
</blockquote>
&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></li>
<li id="fn:4">
<p>See <a href="https://docs.fedoraproject.org/en-US/fedora/latest/system-administrators-guide/kernel-module-driver-configuration/Working_with_Kernel_Modules/#sec-Persistent_Module_Loading">Persistent Module Loading</a> in the Fedora docs.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>This StackOverflow <a href="https://stackoverflow.com/a/22625555">answer</a> has more info.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>DSI was introduced with MacTCP to carry AppleTalk over TCP and enable IP networking. I found this PDF of <a href="https://developer.apple.com/library/archive/documentation/mac/pdf/Networking/ADSP.pdf">chapter 5</a> on ADSP from <em>Inside Macintosh: Networking</em>.&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>The excellent book <em>Inside AppleTalk</em> covers DDP in chapter 4. I've also found this PDF of <a href="https://developer.apple.com/library/archive/documentation/mac/pdf/Networking/DDP.pdf">chapter 7</a> on DDP of <em>Inside Macintosh: Networking</em>.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>As documented in this <a href="https://www.emaculation.com/doku.php/appletalk_printserver_macos_and_osx#fn__25">footnote</a>, there are several <code>pap</code> backends based on the work of Rupi, which I first discovered reading <a href="https://www.openprinting.org/download/kpfeifle/LinuxKongress2002/Tutorial/VI.CUPS-Connections/VI.tutorial-handout-cups-connections.html">How CUPS talks to Print Servers, Print Clients and Printers</a>. Its link to the <code>pap</code> backend is only available via the <a href="https://web.archive.org/web/20020930194124/http://www.oeh.uni-linz.ac.at/~rupi/">Way Back Machine</a>.&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:9">
<p>I adapted the approach taken by Carpentier Pierre-Francois in their <a href="https://github.com/kakwa/image-writer">fork</a>. In further testing, it seems the patch to <code>papstatus.c</code> is unnecessary and reports incorrect status information; the Netatalk 2.x version already returns a human-readable string. There is a bug where, when CUPS displays discovered network printers, PAP printers show their status strangely:</p>
<pre><code>ImageWriter II@office (pap) (%%[ status: Processing... ]%%)
</code></pre>
&#160;<a href="#fnref:9" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></li>
<li id="fn:10">
<p>OpenTransport was an Apple implementation of the UNIX STREAMS networking API, described in <a href="https://developer.apple.com/library/archive/technotes/tn/tn1117.html">Tech Note 1117</a>. Dennis Ritchie wrote in <a href="https://www.bell-labs.com/usr/dmr/www/st.html">A Stream Input Output System</a>:</p>
<blockquote>
<p>Patchwork solutions to specific problems were destroying the modularity of this part of the system. The time was ripe to redo the whole thing. This paper describes the new organization.</p>
</blockquote>
<p>I think this was in response to approaches such as <a href="https://docs.freebsd.org/en/books/developers-handbook/sockets/">BSD Sockets</a>. STREAMS was further iterated on in <a href="http://doc.cat-v.org/plan_9/4th_edition/papers/net/">Plan 9</a>.&#160;<a href="#fnref:10" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:11">
<p>A copy of the original software: <a href="https://www.macintoshrepository.org/9754-apple-ip-gateway-1-0-1">Apple IP Gateway 1.0.1</a>&#160;<a href="#fnref:11" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2023-07-27-interpress</id>
    <title>PostScript and Interpress: A Comparison</title>
    <author><name>Brian Reid</name></author>
    <link href="https://connor.zip/posts/2023-07-27-interpress" />
    <published>2023-07-27T00:00:00-05:00</published>
    <summary>A 1985 essay comparing two similar printer languages of the era: PostScript and Interpress</summary>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>Below I've reproduced the essay by Brian Reid, written in 1985 in an email via ARPA-net.</p>
<p>The original from line:</p>
<pre><code>From: Brian Reid &lt;reid@Glacier&gt;
</code></pre>
<p>and signature</p>
<pre><code>Brian Reid Re...@SU-Glacier.ARPA
Computer Systems Laboratory decwrl!glacier!reid
Stanford University 415/323-6100
</code></pre>
<p>give some insights into how ARPA-net was used at the time. The original is preserved on <a href="https://groups.google.com/g/fa.laser-lovers/c/H3us4h8S3Kk">google groups</a>:</p>
<blockquote>
<p>This essay offers a comparison of two modern schemes for controlling what laser printers print. One scheme, called PostScript, is offered by Adobe Systems, Inc.; the other scheme, called Interpress, is offered by the Xerox Corporation. A discussion of these two schemes has provoked a considerable amount of interest in this forum recently. I have for some time been promising (threatening?) to provide my interpretation of the difference between the two systems. It is long enough and detailed enough that you will certainly never want to read another word on the topic after you read it, but given the nature of computer mail systems you almost certainly will be given the opportunity.</p>
<hr />
<p>To a first order, PostScript and Interpress are indistinguishable. What I mean by that is that by comparison with all other current techniques for page image representation, the two can be considered to be nearly identical. I believe that it is worth looking at how they got to be that way; their similarities and differences can best be understood with a proper historical perspective.</p>
<h1 id="part-i-history">Part I: History</h1>
<p>The Evans and Sutherland Computer Corporation has for quite a number of years sold very expensive, very powerful graphics devices for CAD/CAM and for real-time simulation. The CAD/CAM machine is called The Picture System; the simulation machines are custom-built for each application. Custom simulation graphics machines are used for such purposes as providing the windshield graphics for military flight simulation systems--emulating what a pilot would see if he were looking out the window of a real airplane. These graphics systems use a very clever graphics model, developed by Ivan Sutherland and others, which is based on coordinate system transformations and line drawing.</p>
<p>Although the Evans and Sutherland company is primarily in Salt Lake City, they had a small research office in Mountain View (California) in the early 1970's. John Warnock was in charge of it, and John Gaffney worked for Warnock. One of the activities of the Mountain View office was to develop software for producing 3-dimensional graphical databases both for the Picture System and for the simulation machines. Working with Warnock, Gaffney had by 1975 programmed and documented and released the first version of a programming language that was called &quot;The Evans and Sutherland Design System&quot;.</p>
<p>Gaffney came to E&amp;S from graduate school at the University of Illinois, where he had used the Burroughs B5500 and B6500 computers. Their stack-oriented architectures made a big impression on him. He combined the execution semantics of the Burroughs machines with the evolving Evans and Sutherland imaging models, to produce the Design System. Like all successful software systems, the Design System slowly evolved as it was used, and many people contributed to that evolution.</p>
<p>John Warnock joined Xerox PARC in 1978 to work for Chuck Geschke. There he teamed up with Martin Newell in producing an interpreted graphics system called JAM. &quot;JAM&quot; stands for &quot;John And Martin&quot;. JAM had the same postfix execution semantics as Gaffney's Design System, and was based on the Evans and Sutherland imaging model, but augmented the E&amp;S imaging model by providing a much more extensive set of graphics primitives. Like the later versions of the Design System, JAM was &quot;token based&quot; rather than &quot;command line based&quot;, which means that the JAM interpreter reads a stream of input tokens and processes each token completely before moving to the next. Newell and Warnock implemented JAM on various Xerox workstations; by 1981 JAM was available at Stanford on the Xerox Alto computers, where I first saw it.</p>
<p>In the meantime, various people at Xerox were building a series of experimental raster printers. The first of these was called XGP, the Xerox Graphics Printer, and had a resolution of 192 dots to the inch. Xerox made XGP's available to certain universities, and by 1972 they were in use at Carnegie-Mellon, Stanford, MIT, Caltech, and the University of Toronto. Each of those organizations produced its own hardware and software interfaces. The XGP is historically interesting only because it is the first raster printer to gain substantial use by computer scientists, and was the arena in which a lot of mistakes were made and a lot of lessons learned.</p>
<p>To replace the XGP, Xerox PARC developed a new printer called EARS, and then another newer printer called Dover. After the agony of converting software from XGP to EARS, various Xerox people realized that applications programs generating files for the XGP or for EARS should not be tied to the device properties of the printer itself. Bob Sproull and William Newman, of Xerox PARC, developed a relatively device-independent page image description scheme, called &quot;Press format&quot;, which was used to instruct raster printers what to print.</p>
<p>As part of an extensive grant program to selected universities, Xerox donated Dover printers and made documentation of the Press format available under a nondisclosure agreement. As far as I know, that nondisclosure agreement has never been lifted, though information about Press format has been widely enough distributed that by 1982 researchers at the Swiss Federal Institute of Technology (EPFL) at Lausanne had given conference papers about their own independent implementation of Press format.</p>
<p>Press format was a smashing success; it revolutionized laser printing technology in the academic and research communities, and stimulated a large number of people to think about issues of device-independent print graphics. Nevertheless, Press format had its limitations, and various people felt the need to revise the basic design.</p>
<p>Sproull left Xerox in 1978 to become a professor of computer science at CMU. Newman returned home to England to become an independent consultant. Martin Newell left Xerox to join Cadlinc Corp. Warnock and Geschke remained at Xerox.</p>
<p>While at CMU, Sproull began making plans for a new version of Press that would combine the graphics model of JAM with the page image description properties of Press. Sproull returned to Xerox for a sabbatical leave in 1982, and enlisted the help of Butler Lampson in the creation of the new page image description language that Warnock dubbed &quot;Interpress&quot;. The name caught on.</p>
<p>While it is difficult to separate the contributions made by Sproull and Lampson, it is not incorrect to say that Lampson and Warnock produced the execution model of Interpress while Sproull and Warnock produced the imaging model. It is also approximately correct to characterize this first version of Interpress as being derived from the graphics model and execution model of JAM with additional protection and security mechanisms derived from experience with programming languages like Euclid and Cedar, and a careful silence on the issue of fonts. The trio worked under Geschke's direction, and Geschke was responsible for refereeing disagreements and for making certain that the resulting design was acceptable to the rest of Xerox.</p>
<p>My own involvement with the Interpress effort is difficult to explain. Sproull was my thesis adviser at CMU; we had discussed many of the issues in page description languages at length. As a consultant to PARC during the Interpress design work, my primary activity was one of writing or rewriting the Interpress materials. I also represented a &quot;consumer&quot; point of view rather than a &quot;designer&quot; point of view, and often complained about aspects of the evolving language.</p>
<p>I feel uncomfortable discussing the issues involved in the transition of Interpress from an artifact of the research lab to a marketable product. I shall therefore not discuss them. During this transition phase Geschke and Warnock left PARC (December 1982) to start Adobe Systems, Sproull returned to CMU (June 1983), and Lampson left PARC to join DEC Research (November 1983).</p>
<p>Warnock had various philosophical differences with the final Interpress design, and he voiced those differences to the rest of the Interpress group at every opportunity. At Adobe, Geschke and Warnock saw the opportunity to try again, with a design group composed of people who shared Warnock's ideology. They enlisted Doug Brotz, a Xerox PARC researcher who had had no involvement with any of the Press/JAM/Interpress world, to join them in developing a new page description language named PostScript, based on combining the execution model and imaging model of JAM with a protection structure more reminiscent of C or the Unix shell than of Euclid or Cedar. While not at all a copy of JAM, PostScript resembles JAM more than it resembles Interpress. PostScript also embraced various Unix notions, such as the use of text streams to convey information.</p>
<p>On March 15, 1984, Adobe shipped its first PostScript manual to a potential customer. That PostScript manual was printed on a PostScript printer using a Times Roman font licensed from Allied corporation and digitized by Adobe.</p>
<p>At that time all aspects of the Interpress project were still very proprietary, and it appeared to me that Xerox had no interest in releasing them. However, on April 25, 1984, I received a Xerox press release announcing the availability of Interpress documentation. I finally managed to get my hands on a copy of the Interpress documentation in February of 1985, and was quite surprised to discover that the Interpress documentation had not been printed on an Interpress printer, but was instead printed on a Press format printer, using the same Times-like and Helvetica-like fonts that I had become familiar with at CMU and Stanford on the Dover printers.</p>
<h1 id="part-ii-comparison">Part II: Comparison</h1>
<p>Part I outlined the history of PostScript and of Interpress, as I have been able to determine it. With that historical background, I now offer a comparison of the two languages.</p>
<p>While there are quite a number of extant schemes for the description of printed images, most of them are better described as &quot;data structures&quot; than as &quot;languages&quot;. In particular, only PostScript and Interpress are directly executable.</p>
<p>Languages can be compared at several different levels. Languages have a lexical representation, a syntax, a semantic model, an intended style of usage, and implementation considerations.</p>
<h2 id="lexical-considerations">Lexical Considerations</h2>
<p>The lexical properties of a language define the way the tokens of the language are represented in terms of bits, bytes, or characters. The FORTRAN language was defined in terms of a particular character set, which the implementor was expected to use. The ALGOL language was defined in terms of keywords and symbols, and the language definition left the implementor free to choose how he would represent those keywords in terms of characters available on his computer. For example, the FORTRAN definition of a &quot;DIMENSION&quot; statement is that it is the letter &quot;D&quot; followed by the letter &quot;I&quot; followed by the letter &quot;M&quot;, etc. The ALGOL definition of the &quot;BEGIN&quot; keyword was merely that it was a keyword; the ALGOL standard document used boldface to identify keywords. When ALGOL is implemented on computers whose character sets include boldface, the implementors normally use the boldface characters as a way of identifying keywords. When ALGOL is implemented on other computers, the implementors choose other schemes for identifying keywords, such as putting them in quotes or putting them in all capital letters.</p>
<p>Both PostScript and Interpress have an operator called MOVETO, and in both languages it does exactly the same thing, which is identical to what the MOVETO operator did on the Evans and Sutherland hardware that spawned this graphics model. Let's look at how that operator would be represented in the two languages.</p>
<p>The PostScript language is defined in terms of characters, like FORTRAN. The definition of the PostScript operator &quot;MOVETO&quot; is the letter &quot;M&quot; followed by the letter &quot;O&quot; followed by the letter &quot;V&quot;, etc. The Interpress language is defined in terms of keywords; the definition of the Interpress operator &quot;MOVETO&quot; is that it is a keyword in the ALGOL sense. The Interpress 2.1 standard suggests that MOVETO can be represented with the serial number 25 in a standard encoding that the standard provides, but the definition of the MOVETO keyword is independent of the choice of encoding.</p>
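<p>To make the lexical contrast concrete, here is a sketch of how the same operation might appear in each representation. (In actual PostScript source the operator is spelled in lower case; the Interpress bytes are shown schematically, not as a literal transcription of the 2.1 encoding.)</p>
<pre>
% PostScript: the operands and the operator are all character tokens
72 72 moveto

% Interpress standard encoding, schematically: two encoded integer
% operands followed by the single byte assigned to MOVETO
&lt;push 72&gt; &lt;push 72&gt; &lt;op 25&gt;
</pre>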
<p>Since PostScript is defined in terms of sequences of characters, it is always possible to assume that a PostScript file can be transmitted over any link capable of sending characters, and can be stored in any device capable of holding characters. Since Interpress is defined more abstractly, it is not necessarily possible to make any assumptions at all about a particular Interpress file. However, any Interpress encoding can be translated into any other Interpress encoding, so it is always possible to take an Interpress file and translate it into a stream of characters which will then have properties identical to PostScript's. Conversely, it is always possible to translate a PostScript program into a tokenized keyword form, though the PostScript standard does not suggest any particular tokenization scheme.</p>
<p>It is worth mentioning that the word &quot;token&quot; is slightly overloaded here. A &quot;tokenization scheme&quot; is a means of doing data compression, wherein a sequence of characters is called a &quot;token&quot; and is replaced by a token number, which will occupy less space. However, a language can have tokens without having a tokenization scheme. Both PostScript and Interpress have an execution semantics that is defined in terms of things called &quot;tokens&quot;. The Interpress tokens are normally represented by tokenization schemes--i.e. replaced with integers--while the PostScript tokens are normally left as sequences of characters. In later sections of this message the word &quot;token&quot; will be used to mean either the PostScript kind of token or the Interpress kind of token; by the time they get to the interpreter they are roughly the same thing.</p>
<p>The Interpress 2.1 standard defines a particular encoding of Interpress, and gives bit and byte formats, decimal integer operator numbers, and so forth. This encoding is a full binary encoding, using all 8 bits of each byte, which means that it cannot always be sent over a serial character link. The Interpress standard encoding of a page description normally occupies a smaller number of bytes than the equivalent PostScript character representation. This is possible because binary encodings make more efficient use of the bits.</p>
<p>Interpress files are clearly intended to be transmitted via XNS protocols over Ethernet. In its current form, without further processing or re-encoding, Interpress is not suitable for transmission over character-protocol lines. PostScript files are clearly intended to be transmitted over character-protocol lines. Like all character stream protocols, PostScript can also be transmitted over Ethernet, but a PostScript file will use more bytes than the corresponding Interpress file.</p>
<p>Text files such as PostScript sources are highly redundant (i.e. they make inefficient use of their bits) and can be run through data compression programs (such as the Unix &quot;compact&quot; program) to reduce the amount of space they occupy in storage and during transfer. Data compression techniques will probably not yield much further compression of Interpress files, because the information is already quite tightly packed. After compression of both, the PostScript and Interpress representations of an image will likely occupy approximately the same number of bits.</p>
<h2 id="syntactic-considerations">Syntactic Considerations</h2>
<p>The syntactic issues (or issues of syntax, if you will) of a language are the means by which an interpreter for the language distinguishes variables from operators from constants from function calls from quoted strings, and by which it determines whether or not a certain sequence of characters or tokens is in fact a &quot;legal&quot; construct in the language.</p>
<p>As languages in general go, both PostScript and Interpress are remarkably free of syntax. Since both are token-oriented postfix languages, each token of the language is &quot;executed&quot; as soon as it is identified, and that execution will either succeed or fail depending on the state of the execution environment at that point.</p>
<p>Nevertheless, both languages have a small amount of syntax, though they differ radically in the nature and application of this syntax. In fact, the primary area in which the PostScript language and the Interpress language are incontrovertibly and irrevocably different is in their syntax.</p>
<p>As explained above (Lexical Issues) PostScript is defined in terms of character sequences. A PostScript program is a series of character tokens, separated by white space characters. That program is fed to an interpreter to be executed; the interpreter reads in the characters and assembles them into words (i.e. tokens), then looks up the tokens in dictionaries to determine their meaning. In this regard PostScript is similar to many other programming or command languages: if the PostScript interpreter sees the command &quot;MOVETO&quot;, it finds the current definition of that string, and then performs whatever action is requested in that definition.</p>
<p>By contrast, Interpress is defined in terms of byte codes, which behave more like the instruction codes of a hardware interpreter than like a traditional programming language. Instead of the letters &quot;MOVETO&quot;, an Interpress file will have a byte whose binary value is 25; the number 25 is then used to index an operation code table which directs the interpreter to the program implementing the MOVETO operation.</p>
<p>The byte codes of Interpress can be viewed as a compiled form of the character codes of PostScript. One could imagine a translator that passed over a PostScript file, looked up each name, and produced an output file whose contents was the binary identification of the thing found during the lookup. In fact, the Interpress standard document explains that the two forms are equivalent, and the Introduction to Interpress document explains how to write a program to convert one to another.</p>
<p>There is, however, a crucial difference between the PostScript and Interpress naming schemes that makes them very different, and makes impossible the above-mentioned imagined compiler to translate PostScript into Interpress. That difference is best understood as a semantic difference, and will be explained in the next section.</p>
<p>Returning to syntactic issues, an Interpress file has what is called &quot;static structure&quot; or &quot;lexical structure&quot;. This means that you can look at an Interpress file and make structural assumptions about what you find there. For example, an Interpress file is defined to be a sequence of &quot;bodies&quot;; each body is a sequence of operators and operands. The first body is the &quot;preamble&quot;, or setup code; all following bodies correspond to printed pages. If an Interpress file has 11 bodies, then it will print as 10 pages.</p>
<p>By contrast, a PostScript file has no fixed lexical structure; it is just a stream of tokens to be processed by the interpreter. PostScript prints a page whenever the SHOWPAGE operator is executed. If a PostScript file contains a loop from 1 to 10, with a SHOWPAGE operator inside the loop, then it will print 10 pages even though there is only one actual call to SHOWPAGE in the file. However, since PostScript is a textual language, and since it has a &quot;comment&quot; facility like the C /*....*/ or Pascal {...}, it is possible for the creator of a PostScript file to represent whatever additional information is desired. It is a slight misnomer to call this a comment facility, because the normal use of the word &quot;comment&quot; in programming languages implies that the contents of the comment are irrelevant. PostScript comments are irrelevant in the sense that they do not affect the image produced by a PostScript file, but they do convey machine-readable information about the structure of the document.</p>
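<p>As a sketch of such a loop (the font choice and coordinates here are arbitrary):</p>
<pre>
/Times-Roman findfont 24 scalefont setfont
1 1 10 {                   % the loop counter runs from 1 to 10
  72 720 moveto
  (page ) show
  3 string cvs show        % show the loop counter itself
  showpage                 % one SHOWPAGE in the file; ten pages printed
} for
</pre>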
<p>A PostScript client is free to choose any structuring scheme that he wants, and the tool that he has available to implement this structuring scheme is the PostScript comment. There is a particular &quot;standard&quot; structuring convention documented along with PostScript by which page boundaries and other lexical information can be marked. A PostScript file that follows that convention is called a &quot;conforming&quot; file, but it is a convention and not a rule; the printed image produced by a nonconforming PostScript file will be identical to that produced by the equivalent conforming PostScript file. Conversely, the structure of a PostScript file, as represented by the structuring convention, is completely independent of the appearance of the page images--the actual PostScript text appears to be a series of comments as far as the structuring systems are concerned.</p>
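<p>A sketch of a conforming file; the comment vocabulary is that of the structuring convention documented with PostScript, and the elided drawing code is whatever the file's creator pleases:</p>
<pre>
%!PS-Adobe-1.0
%%Pages: 2
%%EndComments
% ...preamble definitions...
%%Page: 1 1
% ...marks for page one...
showpage
%%Page: 2 2
% ...marks for page two...
showpage
%%Trailer
</pre>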
<p>The technique of mixing two different languages in one file, so that a processor for one language sees the text of the other language as comments, is not new. Perhaps the most widely-known instance of this scheme is Don Knuth's &quot;WEB&quot; system, in which Pascal and TEX are woven together in such a way that the Pascal program looks like a comment to the TEX interpreter and the TEX source looks like a comment to the Pascal compiler.</p>
<p>This absence of fixed lexical structure in PostScript is a two-edged sword. On the one hand, it offers more flexibility in creating page images, especially repetitive ones; on the other hand, it provides more opportunities to make mistakes.</p>
<p>One final syntactic issue is perhaps worth mentioning, though it could also be considered a semantic issue. Interpress does not support &quot;variables&quot; so much as it supports &quot;registers&quot;, in the hardware sense. All storage in Interpress is accessed by address and not by name. What would be called a &quot;local variable&quot; in a programming language is represented in Interpress by an integer subscript into the procedure's frame. All programming languages must ultimately reduce their variable names into memory locations; Interpress asks that this translation be performed by the creator of the Interpress file and not by the interpreter. An obvious benefit of this approach is efficiency--no name lookups need be performed as the file is being printed. An obvious drawback of this approach is the restricted name space available to the programmer and the extra care that must be taken to manage addresses instead of names. By contrast, PostScript supports ordinary named variables.</p>
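<p>For illustration, a PostScript variable is simply an entry in a dictionary, created and referenced by name:</p>
<pre>
/margin 72 def             % bind the name margin to the value 72
margin margin moveto       % looked up by name at each use
</pre>
<p>The Interpress equivalent would instead fetch, say, slot 3 of the current frame, an address chosen by the program that created the file.</p>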
<h2 id="semantics">Semantics</h2>
<p>Since both Interpress and PostScript derive their semantics from the same source, it stands to reason that the semantics would be similar. Both use similar graphical semantics, the same imaging model, and both use very similar execution semantics. The differences are minor, though one could imagine that the consequences of those differences might be major.</p>
<p>There are two substantive differences between the graphical semantics of PostScript and Interpress 2.1, namely that Interpress has no facility for describing curves, and the Interpress standard is completely silent on the issue of fonts.</p>
<p>A curve can of course be approximated with a series of line segments, and if the line segments are short enough the resulting appearance will be identical, but many classes of curved lines, such as those appearing in fonts, can be described very succinctly in terms of the PostScript CURVETO operator while requiring a tedious collection of short line segments to describe in Interpress. Because of the importance of fonts to printed images, this seemingly minor omission could possibly have major consequences.</p>
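<p>As a sketch of the difference (coordinates arbitrary), one CURVETO replaces an entire polygonal approximation:</p>
<pre>
% PostScript: a single cubic curve, specified by the current point,
% two control points, and an endpoint
72 72 moveto
100 200 200 200 228 72 curveto
stroke

% the Interpress equivalent: many short line segments approximating
% the same arc, all generated by the file's creator
</pre>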
<p>On the issue of fonts, the Interpress standard states only that a font is an operator that will be executed for you when appropriate, and that the operators for that font are defined &quot;in the Environment&quot;. A PostScript font is just an ordinary PostScript defined operator, and the PostScript manual gives explicit instructions for creating user-defined fonts and making those font definitions be part of a PostScript file. One could imagine that it is possible to write an Interpress composed operator (in Interpress, of course) to behave like a user-defined font, but the Interpress implementations do not currently have any mechanism for recognizing that an operator is in fact a user-defined font and should therefore receive any kind of special treatment. This is not a deficiency in Interpress, just a silence, accompanied by a deficiency in current implementations (this and other implementation issues are discussed in the last section).</p>
<p>There are three consequential differences between PostScript execution semantics and Interpress execution semantics: user-defined operators, the nature of the &quot;firewalls&quot; between pieces of the program, and error recovery.</p>
<p>In Interpress, a user-defined operator is syntactically different from an intrinsic operator, and requires an explicit &quot;DO&quot; operator to call it. In PostScript a user-defined operator is syntactically identical to an intrinsic operator, and in fact any intrinsic operation can be redefined by simply making a new entry for that operator's name in the appropriate dictionary. This is stylistically similar to the difference in lexical structure: Interpress guarantees that if a byte code 25--the MOVETO operator--is found in a file, that it will when executed perform a standard MOVETO. PostScript guarantees nothing because it enforces nothing. If you want to redefine the meaning of MOVETO, then you can do so, and when the characters &quot;M O V E T O&quot; are found in a PostScript file, the redefined operator will be executed instead. To execute a PostScript user-defined operator you just include its name, the same way you execute any other operator. To execute an Interpress user-defined operator, you execute the DO operator (or a variation of it), after pushing onto the stack the thing that you want to execute.</p>
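<p>For instance, a PostScript file can interpose on MOVETO itself. The sketch below logs every move and then hands the coordinates to the intrinsic operator, fetched directly from the system dictionary to avoid infinite recursion:</p>
<pre>
/moveto {                       % shadows the intrinsic moveto by name
  2 copy exch == ==             % log the x and y coordinates
  systemdict /moveto get exec   % then perform the real moveto
} def
</pre>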
<p>Analogously with the static structural issues, the PostScript user-defined-operator scheme offers more flexibility than Interpress but carries with it more dangers. Like the old saw about giving one enough rope to hang himself, the additional flexibility of the PostScript scheme requires discipline on the part of the user. Furthermore, just as PostScript has a convention for the voluntary inclusion of static structure in a file, it has a mechanism by which a PostScript program can reference the true built-in version of an operator and not the current, possibly user-redefined, version of an operator. From the point of view of language design, this scheme is not terribly elegant, but it is quite practical, as it provides a mechanism for the solution of all of the problems associated with operator redefinition and the prevention thereof.</p>
<p>It is this ability to redefine builtin operators that makes the compilation of a textual PostScript file into an encoded Interpress file (mentioned above under Syntax) impossible. A static analysis cannot determine the operator that will be executed when the textual token is interpreted. By contrast, it is easy to translate Interpress into PostScript, because all of Interpress' semantic capabilities have direct equivalents in PostScript, and the lexical translation is straightforward.</p>
<p>Interpress has a distinction between &quot;bodies&quot; and &quot;operators&quot;. A &quot;body&quot; is a sequence of Interpress tokens. The Interpress operator &quot;MAKESIMPLECO&quot; (make simple composed operator) translates a body into an operator. Like all other Interpress operators that reference bodies--referred to in the Interpress standard as &quot;body operators&quot;--the MAKESIMPLECO operator is prefix and not postfix. This was done to make it easier for small computers to implement Interpress interpreters; it has the interesting side-effect of making it impossible for an Interpress program to generate and then execute a piece of Interpress source code. I would guess that the entire reason for the distinction between Interpress bodies and operators is to enable a clean prefix implementation of body operators while at the same time permitting the more conventional postfix use of expressions of type &quot;operator&quot;.</p>
<p>By contrast, PostScript represents operator bodies as arrays of PostScript tokens. The PostScript lexical scanner processes a body by building an array out of the tokens that it finds in the input stream; that body is then handled as an ordinary data value in the language, and it can be stored into variables, executed, modified, searched or searched for, etc. The translation of a body into something like an Interpress operator consists merely of returning the address where the body is stored; that can be handled by the PostScript type system and does not require a special conversion operator. Consequently, a PostScript program is able to generate an array of PostScript operators, however it so chooses, and then declare that array to be a new PostScript operator and have it be executed just like any other PostScript operator.</p>
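<p>A sketch of this generate-then-execute ability, which has no Interpress counterpart; LOAD fetches each operator's current definition as a value rather than executing it:</p>
<pre>
[ 72 72 /moveto load 300 400 /lineto load /stroke load ]
cvx                        % mark the array executable
exec                       % and run it as if it had been source text
</pre>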
<p>The second important semantic difference between PostScript and Interpress is the set of mechanisms that they offer for protecting one piece of the file from side effects in another. As you might be able to guess if you have read this far, the Interpress protection mechanism is static and mandatory while the PostScript protection mechanism is dynamic and optional. This kind of mechanism is often referred to as a &quot;firewall&quot;.</p>
<p>An Interpress file consists of a series of bodies. Each body is executed completely independently of each other body. In particular, at the beginning of each page body, the execution environment is restored to the state that it had at the end of execution of the preamble, so that each page body is executed as if it were the only page in the document. There is absolutely nothing that the code in one Interpress page can do that will have any effect on the execution of the code in any other Interpress page, and the Interpress language guarantees that independence. This permits, for example, the pages to be executed or printed in any order, front to back or back to front, or in folios of 16 pages at a time, with complete confidence that the appearance of the pages will not change.</p>
<p>By contrast, a PostScript file has no static structure, so there is no convenient place to build automatic firewalls. PostScript provides, instead, two pairs of operators by which a PostScript user can build his own firewalls wherever he wants them. There is an operator called SAVE, and another operator called RESTORE. The RESTORE operator restores the execution state of the machine back to what it was when the last SAVE operator was executed. Thus, if a PostScript user wants to have pages that are firewalled against each other, then he puts a SAVE operator at the beginning of the page and a RESTORE operator at the end of the page. If the PostScript user wants to play tricks, and build PostScript files that do bizarre things with the execution state between pages, he is free to do so by leaving out the SAVE and RESTORE.</p>
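<p>A per-page firewall built by hand might be sketched as follows. SAVE returns a token that the matching RESTORE consumes, so it is convenient to bind that token to a name:</p>
<pre>
/pgsave save def           % snapshot the execution state
  /Helvetica findfont 10 scalefont setfont
  72 720 moveto (page one) show
  showpage
pgsave restore             % roll everything back before the next page
</pre>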
<p>By now you can probably see the fundamental philosophical difference between PostScript and Interpress. Interpress takes the stance that the language system must guarantee certain useful properties, while PostScript takes the stance that the language system must provide the user with the means to achieve those properties if he wants them. With very few exceptions, both languages provide the same facilities, but in Interpress the protection mechanisms are mandatory and in PostScript they are optional. Debates over the relative merits of mandatory and optional protection systems have raged for years not only in the programming language community but also among owners of motorcycle helmets. While the Interpress language mandates a particular organization, the PostScript language provides the tools (structuring conventions and SAVE/RESTORE) to duplicate that organization exactly, with all of the attendant benefits. However, the PostScript user need not employ those tools.</p>
<p>Before taking a stand on this issue, you must remember that neither Interpress nor PostScript is engineered to be a general-purpose programming language, but rather to be a scheme for the description of page images, so it is not necessarily valid to apply programming language lore to these two systems.</p>
<p>The third area in which there are significant semantic differences between PostScript and Interpress is in error handling and error recovery. The Interpress 2.1 standard is slightly vague as to what happens when various error conditions occur; one assumes that the implementors of Interpress printers will do something reasonable. The PostScript language provides a user-extensible error-recovery mechanism that is keyed on PostScript's ability to redefine intrinsic operators. Whenever an error of any kind occurs in PostScript, be it the printer out of paper, the file asking for a font that doesn't exist, or a division by zero, the PostScript interpreter responds by executing an &quot;error operator&quot;. If the error operator has not been redefined, then some standard action is taken; sometimes the standard action is to do nothing, while sometimes the standard action is to abort or to retry. The standard action is merely the execution of the error operator.</p>
<p>The Interpress documentation does not offer much explanation, one way or another, of error handling. The Interpress standard describes certain kinds of error conditions that can occur, such as &quot;appearance error&quot; or &quot;master error&quot;, but does not specify exactly what will happen if those errors occur. I assume that the reason the standard is vague is to provide leeway to the implementors in error handling. The Interpress language standard does not describe any technique by which an Interpress master can control or modify the error recovery actions.</p>
<p>When a PostScript error occurs, an error operator is executed. There is a set of built-in error operators provided as part of PostScript, and documented like all other operators. If a PostScript user wants to change the error handling of a PostScript printer, he simply changes the dictionary entry for the relevant error operator. Depending on the relative position of that redefinition with respect to SAVE and RESTORE operators in the PostScript file, the redefinition will have a certain lifetime. A SAVE and RESTORE pair is wrapped around each separate file printed by a PostScript printer, so that the redefinition does not carry over to other jobs. The manager of an installation can change the overall default of the printer by sending it a redefinition, during printer startup, before entering the SAVE/RESTORE loop around each print job.</p>
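<p>As a sketch, the entry for the arithmetic error UNDEFINEDRESULT (raised by division by zero, among other things) can be replaced so that the job announces itself before stopping:</p>
<pre>
errordict /undefinedresult {
  (arithmetic error in this job\n) print
  stop                     % abandon the current execution context
} put
</pre>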
<p>Like so much of PostScript's flexibility, the ability to redefine operators is a two-edged sword. Redefining an operator can be used to advantage by clever and knowledgeable users, and it can be used as a technique for fixing bugs in a PostScript implementation. For example, if an accounting package were not provided as part of a PostScript implementation, the owners of a PostScript printer could add page accounting to their printer by downloading a redefinition of the SHOWPAGE operator that kept accounting information. However, a user might be able to disable that accounting by doing yet another redefinition that disabled the installation's accounting. To circumvent this class of problem, PostScript provides a mechanism for declaring certain objects to be read-only, or execute-only. The management of a shared PostScript printer can specify that part of its power-up or restart sequence is to load a configuration file; that configuration file can redefine certain operators--for the purpose of bug fixing or accounting or any other reason--and then, if desired, mark the redefined operators read-only so that they cannot be further redefined. As a language mechanism this is very clumsy, but as an operational technique it is effective.</p>
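<p>The accounting example might be sketched as follows; the counter name PAGECOUNT is hypothetical, something an installation would choose for itself:</p>
<pre>
/pagecount 0 def                   % loaded once, at printer startup
/showpage {
  /pagecount pagecount 1 add def   % accounting: count every page
  systemdict /showpage get exec    % then perform the real showpage
} bind def
</pre>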
<h2 id="implementation-issues">Implementation Issues</h2>
<p>The implementation considerations are the most difficult to review and compare, because it is next to impossible to determine the reason for some annoying property of an implementation; it is also not entirely proper to criticize a language for the state of its implementation. Nevertheless, the history of programming languages has repeatedly shown that good implementations of languages have longer-lasting impact than good designs. For example, I quite commonly encounter people who choose to run VMS on their Vax systems instead of Unix and who offer the explanation that they do this because the VMS implementation of Fortran is so good that their programs will run a lot faster. Naturally, other people have other reasons; this is just an example.</p>
<p>The Interpress documentation is peppered with &quot;fine print&quot; explaining the possible limitations of various possible Interpress printers, and a chapter of the Interpress standard is devoted to a discussion of the various ways to subset Interpress so that stripped-down versions of the language can be implemented. Indeed, as of today (March 1, 1985) I am not aware of the existence of any printer that implements the full Interpress 2.1 language defined in the standard. Certainly none is offered now as a product, and if one has been announced the announcement has not yet reached me. The Xerox 8044 &quot;Star&quot; printer and the 5700 and 2700 printers all implement various subsets of Interpress. Perhaps there are others. The only one of these that I have used to any extent is the 8044. It implements a textual subset of Interpress, with the capability of a certain amount of line graphics, and has some unknown capacity for more sophisticated graphics. It does not implement very many of the features that distinguish Interpress from the older Press format, and in fact has some surprising limitations. For example, Interpress provides the ability to get rounded ends on line segments. The 8044 implementation of Interpress that I experimented with faked the circular arcs with sections of a 9-sided polygon. The Interpress standard promises the ability to rotate the coordinate system through arbitrary angles; all of the existing implementations of Interpress limit coordinate system rotations to multiples of 90 degrees.</p>
<p>Xerox quite likely has been developing true Interpress printers, which implement the full documented language, but none has been demonstrated or announced.</p>
<p>By contrast, the PostScript documentation makes no mention of any subset, or of any implementation restrictions. The entire PostScript language was fully implemented before any PostScript documentation was distributed or any printers shipped. There are four PostScript printers announced and demonstrated by three OEM vendors: the Apple LaserWriter (300 dots/inch) the QMS 1200A (300 dots/inch), the Mergenthaler P300 phototypesetter (2540, 1270, or 635 dots/inch), and the Mergenthaler P101 phototypesetter (1270 or 635 dots/inch). The Apple printer has been shipped to customers, the QMS printers are in Beta test, and the Mergenthaler machines will be shipped to customers by Fall of 1985.</p>
<p>All implementations of PostScript printers can print any PostScript file, with no restrictions save the availability of fonts as licensed to that manufacturer. Circles come out as circles. A PostScript file that has been proof-printed on an Apple LaserWriter can be typeset on a Mergenthaler P101 without making any changes to the file. Naturally all device-independent page representation schemes have this ability as their goal, and many claim to be able to do it, or claim that they could do it if they had all of the necessary fonts available in all of the requisite sizes. The current set of PostScript printers actually do it.</p>
<p>Given that Xerox has been working on Interpress for about twice as long as Adobe has been working on PostScript, and many of the graphics techniques necessary for the implementation are copiously described in the open literature, I find it surprising that there are no true Interpress printers on the market. I am puzzled by this, and as a student of programming languages I am very interested in learning whether or not there are any properties of the Interpress language itself that are somehow contributing to this difficulty, or whether this is just the usual sluggishness that one expects from all large companies.</p>
</blockquote>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2023-07-27-hp-designjet-650c</id>
    <title>Printing like it's 1995</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2023-07-27-hp-designjet-650c" />
    <published>2023-07-27T00:00:00-05:00</published>
    <summary>Fixing up and printing to an HP DesignJet 650C</summary>
    
    <media:content url="https://connor.zip/resources/images/2023-07-27-hp-designjet-650c/unplotter.jpg" medium="image" width="605" height="800"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>I found both of my printers second-hand at Goodwill. At twenty and nearly thirty years old, it's amazing that they still operate, and that through open source software we can print to them from modern hardware. They are:</p>
<ul>
<li>An HP LaserJet 4100N laser printer. Released in 2004, it prints black and white documents at a stunningly crisp 1200x1200 DPI.</li>
<li>An HP DesignJet 650C large format inkjet plotter. Released in 1995, it prints from a 36&quot; roll at 300x300 DPI color and 600x600 DPI black-and-white.</li>
</ul>
<p>With some searching on eBay I was able to equip each with JetDirect network cards, and added a duplexer to the LaserJet. Network cards are not required; you can instead install CUPS on a machine with a serial, parallel, or USB connection to your printer. I chose to network them because it simplifies hosting a print server on a VM and allows me flexibility in where they're located.</p>
<ul>
<li>
<p>The DesignJet supports MIO <a href="https://en.wikipedia.org/wiki/JetDirect#MIO">cards</a>, the most recent being the JetDirect 400N cards released in 2000. The preferred card is the J4100A, which supports 10/100BASE-TX Ethernet (RJ45), aka <em>Fast Ethernet</em>, and 10BASE2 (BNC).</p>
</li>
<li>
<p>The HP LaserJet supports EIO <a href="https://en.wikipedia.org/wiki/JetDirect#EIO">cards</a>. I have the 635n installed, which provides gigabit Ethernet and IPv6 support, alongside a J4135A HP Connectivity Card which provides USB, serial, and LocalTalk ports.</p>
<figure>
<img src="/resources/images/2023-07-27-hp-designjet-650c/hp-jetdirects.jpg" alt="HP EIO JetDirect Cards" />
<figcaption>HP EIO JetDirect Cards</figcaption>
</figure>
</li>
</ul>
<p>The HP DesignJet 650C was almost in working order when I found it at Goodwill. After soaking the ink cartridges in hot water and wiping their ports with alcohol, removing the part of the roller that introduces friction so that it could roll freely, and oiling the print head rail, it was able to produce beautiful 36&quot; wide inkjet prints. Below is a close-up of a print of the <a href="https://www.ardot.gov/divisions/transportation-planning-policy/gis-mapping/arkansas-state-highway-tourist-map/">Highway Map of Arkansas</a> which I was able to print scaled up while maintaining its resolution. You can see the 300x300 color DPI and the 600x600 DPI blacks:</p>
<figure>
<img src="/resources/images/2023-07-27-hp-designjet-650c/map.jpg" alt="Close up of Highway Map of Arkansas print" />
<figcaption>Close up of Highway Map of Arkansas print</figcaption>
</figure>
<p>Maps are ideal for this printer, which was expected to be used mostly for drafting prints. The roll it had when I found it is very thin paper that won't hold much ink. I was also able to find some appropriate new old stock ink cartridges at the Goodwill Computer Store, which I keep on hand in case I run out of ink.</p>
<p>Below is an ad for the HP DesignJet 650C which appeared in the July 1995 issue of InfoWorld magazine, with a price tag of $8,595 for the 24&quot; plotter and the add-on PostScript support ROM.</p>
<figure>
<img src="/resources/images/2023-07-27-hp-designjet-650c/unplotter.jpg" alt="An ad for the HP DesignJet 650C" />
<figcaption>An ad for the HP DesignJet 650C</figcaption>
</figure>
<p>Although some still utilize this printer via <a href="https://www.myolddesignjet.com/drivers.html">modern Windows</a>, it was designed to work with Windows NT. The MIO network card even has a web interface that utilizes Java Applets, which only works on Netscape 4.0.3 (not 4.0.4), or IE 4. I was able to spin up a Windows NT VM on VMWare and install Netscape to see it in action:</p>
<figure>
<img src="/resources/images/2023-07-27-hp-designjet-650c/designjet-web-ui.jpg" alt="The HP DesignJet 650C Web UI" />
<figcaption>The HP DesignJet 650C Web UI</figcaption>
</figure>
<p>Note the antiquated supported protocols including <em>EtherTalk</em>, a version of AppleTalk using Ethernet as the physical layer instead of LocalTalk. AppleTalk's auto-configuration features were the basis for Bonjour aka mDNS/DNS-SD.</p>
<p>Installing Windows NT in VMWare was a bit challenging; I used <a href="https://archive.org/details/winnt40_x86en_entsrv.d1">Windows NT 4.0 Enterprise Server</a> and was able to get mouse and better video support using <a href="https://packages.vmware.com/tools/esx/3.5latest/windows/x86">VMWare Tools for Windows on x86, version 3.5</a>.</p>
<h1 id="drivers">Drivers</h1>
<p>In <a href="/posts/2023-06-08-airprint-with-cups">AirPrint with CUPS</a>, I discuss setting up CUPS and Avahi to support driverless printing from devices like an iPhone; there I use the GhostScript driver. The OpenPrinting <a href="https://www.openprinting.org/printer/HP/HP-DesignJet_650C">page</a> mentions that the GhostScript driver has issues with color, especially on the B models like mine (C2859B):</p>
<blockquote>
<p>Works &quot;almost&quot; - RGB data are sent to the plotter. The CMYK mode of the &quot;B&quot;-Models and the 75x Series is not supported. Images are somewhat &quot;greenish&quot;; gray is composed of CMY. Nice for really LARGE printouts ... (ISO A0).</p>
</blockquote>
<p>These are the options I've thought of to print to the printer:</p>
<ul>
<li>
<p>Using the suggested GhostScript printer driver <code>dnj650c</code>. This driver prints blacks using black ink but grays are a mix of colored inks since it uses RGB instead of CMYK colors.</p>
</li>
<li>
<p>Using a Gutenprint <a href="https://github.com/koenkooi/gutenprint/blob/master/src/main/print-pcl.c#L740">driver</a>.</p>
<p>After installing Gutenprint drivers with <code>dnf install gutenprint-cups</code>, I configured a new printer using the HP DesignJet 750C driver it provides. Test pages produce blacks and grays that are made from a combination of color inks instead of black ink.</p>
</li>
<li>
<p>Using the HP Windows <a href="https://www.myolddesignjet.com/drivers.html">driver</a>, which takes a similar approach to GhostScript: converting the job to raster format before sending it in chunks to the printer. From the Windows NT VM, I can expose the printer via <code>lpd</code> and configure that backend in CUPS.</p>
</li>
<li>
<p>Using an inbuilt PostScript interpreter, which is only available on the more expensive /PS models, or via an expansion ROM. I was able to find a C3545-60101 PostScript ROM SIMM, and installing it allowed me to <code>nc designjet.home.arpa 9100 &lt; test.ps</code> and see the printed result. The chip goes in the second slot from the top, under the panel which contains the RAM.</p>
</li>
</ul>
<p>In these experiments, I used the CUPS setup wizard to create a new printer (and upload a PPD), then copied the Avahi service file and pointed it at the new IPP endpoint with a new UUID and name.</p>
<p>For the last two approaches, I need a PPD file (and possibly an ICC color profile) which I can configure in CUPS. HP originally provided these; they mention a v2.0 file which enabled <em>long axis printing</em> for banners, but it's no longer available from their website. I reached out to <a href="http://www.davidmaudlin.com/">David Maudlin</a> based on an <a href="https://community.graphisoft.com/t5/Installation-update/Plotter-recommendation-for-Mac-OSX/td-p/197061">old Graphisoft post</a> of his, hoping he might have had the PostScript module and a PPD. Sadly this wasn't the case, but he helpfully provided me with a copy of the <a href="/resources/pdfs/hp-designjet-650c-service-manual.pdf">Service Manual</a>.</p>
<p>The PPD 4.3 format is documented in <a href="/resources/pdfs/adobe-ppd-spec.pdf">Adobe Tech Note #5003</a>, with extensions for Foomatic and CUPS documented in the <a href="https://refspecs.linuxfoundation.org/LSB_4.0.0/LSB-Printing/LSB-Printing/ppdext.html">Linux Standard Base Printing Specification 4.0</a> and in the <a href="https://www.cups.org/doc/postscript-driver.html">CUPS Documentation</a>. A PPD for an HP DesignJet should take advantage of <a href="https://support.hp.com/rs-en/document/bpp01888">embedded page size</a>, e.g. <code>&lt;&lt;/PageSize [612 792]&gt;&gt;setpagedevice</code> within the PostScript document<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>, to avoid:</p>
<blockquote>
<p>Without an embedded page size in a PostScript file, PostScript-capable HP DesignJet printers default to a paper size that is the width of the roll multiplied by 1.5. For example, if the printer had a 36-inch roll, the printer would feed and cut the paper at 54 inches (36 x 1.5), regardless of the size of the image on the paper.</p>
</blockquote>
<p>I've witnessed this behavior when printing a <a href="http://users.fred.net/tds/lab/postscript.html">test PostScript file</a>, so add <code>&lt;&lt;/PageSize [612 792]&gt;&gt;setpagedevice</code> (for Letter) to avoid wasting paper. This can be done automatically via a PPD file.</p>
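<p>The insertion can also be scripted. A minimal sketch using GNU <code>sed</code>, assuming the file begins with a <code>%!PS</code> header line (the <code>test.ps</code> and <code>sized.ps</code> file names are mine):</p>
<pre><code class="language-sh"># Insert a Letter page size after the %!PS header line so the
# plotter cuts at 11 inches rather than roll width x 1.5
sed '1a &lt;&lt;/PageSize [612 792]&gt;&gt; setpagedevice' test.ps &gt; sized.ps
# Then send the result straight to the printer's JetDirect port:
#   nc designjet.home.arpa 9100 &lt; sized.ps
</code></pre>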
<p>I've successfully printed a PDF through CUPS via Windows NT:</p>
<ul>
<li>using a custom <a href="/resources/ppds/designjet-nt.ppd">PPD file</a>, which is essentially a generic PostScript PPD combined with some additional page size options,</li>
<li>and by configuring CUPS against Windows NT using the <code>lpd</code> backend (<code>lpd://nt.home.arpa/DESIGNJET</code>),</li>
<li>where <code>DESIGNJET</code> is the share name (under Control Panel, then Printers, then the HP Designjet, then the Printer menu, then Properties, then the Sharing tab)</li>
<li>and the Microsoft TCP/IP Printing service is installed (under Control Panel, then Network, then the Services tab).</li>
</ul>
<p>I believe Windows NT is simply piping the PostScript file it receives from the network directly to the printer, instead of running it through its own driver, which would use RTL to speak to the printer. This <a href="https://learn.microsoft.com/en-us/windows-hardware/drivers/print/client-side-rendering">documentation</a> sheds some light on the situation:</p>
<blockquote>
<p>Before Windows 2000, Windows rendered print jobs on the client computer and the rendered data was sent to the print server for printing. Since Windows 2000 and before Windows Vista, print-job rendering took place on the print server. Print-job rendering was moved to print server beginning with Windows 2000 because print servers offered more processing power than the client computers. The more powerful print servers could then complete the processor-intensive task of print-job rendering.</p>
</blockquote>
<p>So to take advantage of the official HP driver, I'll need to spin up a Windows 2000 VM.</p>
<p>The CUPS debug log complains:</p>
<pre><code>[Job 178] No resolution information found in the PPD file.
[Job 178] Using image rendering resolution 300 dpi
</code></pre>
<p>I'm not yet sure how to add this info, which is 300x300 for color but 600x600 for black and white.</p>
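<p>If I'm reading the PPD spec correctly, resolution is advertised with the <code>*DefaultResolution</code> keyword, optionally alongside a <code>*Resolution</code> UI group. A sketch of what might be added to the PPD (untested; I'm not sure the 600x600 black-and-white mode can be expressed alongside the 300x300 color default):</p>
<pre><code>*DefaultResolution: 300dpi
*OpenUI *Resolution/Resolution: PickOne
*OrderDependency: 10 AnySetup *Resolution
*Resolution 300dpi/300x300 DPI: &quot;&lt;&lt;/HWResolution [300 300]&gt;&gt; setpagedevice&quot;
*Resolution 600dpi/600x600 DPI: &quot;&lt;&lt;/HWResolution [600 600]&gt;&gt; setpagedevice&quot;
*CloseUI: *Resolution
</code></pre>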
<p>Using this same PPD file, I can print directly to the printer and utilize the PostScript ROM. Both printing through Windows NT and directly via PostScript yield a test page which uses black ink for black, but I ran out of paper before more testing could be done so I'm uncertain whether all colors are constructed in the proper CMYK color space.</p>
<h1 id="a-short-history">A Short History</h1>
<p><a href="/resources/pdfs/adobe-postscript-language-reference.pdf">PostScript</a> was created as a way for an application on a workstation to describe how to print a document in a standard way, such that any PostScript printer could render it. It grew out of Interpress, created by John Warnock and others at the Xerox Palo Alto Research Center (PARC), where laser printing originated. See <a href="/posts/2023-07-27-interpress">PostScript and Interpress: A Comparison</a> for an in-depth history of PostScript and Interpress. PostScript also defined a high-end proprietary font format, Type 1<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>. In 2022, the Computer History Museum published the PostScript v.10 <a href="https://computerhistory.org/blog/postscript-a-digital-printing-press/">source code</a> from 1983.</p>
<p>PostScript is not a <em>raster</em> format; it's a stack-based interpreted program (sometimes described as <a href="/resources/pdfs/thinking-forth.pdf">Forth</a>-like). Originally the interpreter ran on the printer itself, producing a raster image at the correct resolution and size for printing on the chosen media. For Steve Jobs's unveiling of the Apple LaserWriter in 1985, an expensive workgroup laser printer which used PostScript (and had the most powerful CPU of the Apple line-up at the time, a Motorola 68000 at 12MHz<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>), he printed an IRS tax form in twenty seconds. To get the form's PostScript program to run that fast, Adobe founder John Warnock applied optimizations like loop unrolling in a program that became Acrobat Distiller<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>, as described in <a href="/resources/pdfs/warnock-on-pdf.pdf">Warnock on PDF: Its Past, Present and Future</a>. The LaserWriter was so important to Adobe that my first edition copy of the PostScript Language Reference<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup> from 1985 advertises &quot;Includes Detailed Programming Information on the Laser Writer&quot; on the cover. These optimizations form part of <a href="/resources/pdfs/adobe-camelot.pdf">Project Camelot</a>, which became PDF:</p>
<blockquote>
<p>This project's goal is to solve a fundamental problem that confronts today's companies. The problem is concerned with our ability to communicate visual material between different computer applications and systems. The specific problem is that most programs print to a wide range of printers, but there is no universal way to communicate and view this printed information electronically. The popularity of FAX machines has given us a way to send images around to produce remote paper, but the lack of quality, the high communication bandwidth and the device specific nature of FAX has made the solution less than desirable. What industries badly need is a universal way to communicate documents across a wide variety of machine configurations, operating systems and communication networks. These documents should be viewable on any display and should be printable on any modern printers. If this problem can be solved, then the fundamental way people work will change.</p>
</blockquote>
<p>In a modern setup, the printer no longer interprets PostScript itself; instead, its successor format, PDF, is received by our print server, CUPS, which utilizes <a href="https://www.cups.org/doc/man-filter.html">filters</a> to transform it into a format the printer can understand. Although CUPS can print to a PostScript printer, using the printer's built-in PostScript interpreter means being limited by its likely small amount of onboard memory and its version of PostScript. For instance, the DesignJet can be expanded to 68MB of RAM, but when printing using carefully crafted HP Raster Transfer Language, it only needs to hold a few lines of image at a time as it prints. In my case I use:</p>
<ul>
<li><a href="https://developers.hp.com/hp-linux-imaging-and-printing"><code>hplip</code></a>, which is provided by HP and includes a staggering number of drivers, for the HP LaserJet.</li>
<li><a href="https://openprinting.github.io/projects/02-foomatic/">Foomatic</a>, which provides PostScript Printer Description (PPD) files for many printers, for the HP DesignJet since it is too old to be included in <code>hplip</code>.</li>
</ul>
<p>The <a href="https://www.openprinting.org/download/kpfeifle/LinuxKongress2002/Tutorial/IV.Foomatic-Developer/IV.tutorial-handout-foomatic-development.html">History of Foomatic</a> discusses CUPS' 1999 release, and the <code>pstoraster</code> filter which utilizes <a href="https://www.ghostscript.com/">GhostScript</a> (an open PostScript interpreter and set of printer drivers) to convert PostScript to a raster format that printers can understand. In the case of the HP DesignJet 650C, the <a href="https://git.ghostscript.com/?p=ghostpdl.git;a=blob;f=devices/gdevcdj.c;h=8515ddebe9805fccb16186ad361370561ba6eafb;hb=refs/heads/master">driver</a> is present in the initial 1998 <a href="https://git.ghostscript.com/?p=ghostpdl.git;a=commit;f=devices/gdevcdj.c;h=eec0ef527f18c5978c4476c9490f4de4c4249628">commit</a>. Foomatic provides a database of PPDs, which describe details of the printer including: what GhostScript driver to use, printer dialogue options, paper sizes, etc.</p>
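<p>GhostScript can also be invoked by hand to see what such a driver produces; a sketch, assuming a GhostScript build that includes the <code>dnj650c</code> device (the file names are mine):</p>
<pre><code class="language-sh"># Render PostScript to the DesignJet's raster format at 300 DPI;
# -o implies -dBATCH and -dNOPAUSE
gs -sDEVICE=dnj650c -r300 -o out.prn in.ps
# out.prn could then be sent raw to the printer, e.g. over port 9100
</code></pre>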
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>This syntax is fine since the PostScript ROM supports Level 2, which introduced the <code>&lt;&lt; ... &gt;&gt;</code> syntax for dictionaries, but the <a href="/resources/pdfs/adobe-ppd-spec.pdf">tech note</a> mentions on page 30:</p>
<blockquote>
<p>To further the aim of printing whenever possible, even when Level 2 code is sent to a Level 1 device, the following recommendations should be followed when building a PPD file.</p>
<ul>
<li>
<p>Do not use the Level 2 dictionary syntax symbols <code>&lt;&lt;</code> and <code>&gt;&gt;</code> directly in invocation code when constructing dictionaries. Doing so will cause a <code>syntaxerror</code> if this code is re-directed to a Level 1 device. Such a <code>syntaxerror</code> cannot be trapped in a <code>stopped</code> context by a print manager. The two alterna- tives are to use the more verbose Level 1 method:</p>
<pre><code>N dict
dup /name1 value1 put
dup /name2 value2 put
...
dup /nameN valueN put
</code></pre>
<p>or to put the more efficient Level 2 method into an executable string:</p>
<pre><code>(&lt;&lt;) cvx exec /name1 value1 /name2 value2
...
/nameN valueN (&gt;&gt;) cvx exec
</code></pre>
<p>This second method will avoid the <code>syntaxerror</code> described above. It will consume a tiny amount of VM, which will be restored by automatic garbage collection on a Level 2 device.</p>
</li>
</ul>
</blockquote>
<p>The generic PPD bundled with macOS uses this style:</p>
<pre><code>*PageSize Letter/US Letter: &quot;2 dict dup /PageSize [612 792] put dup /ImagingBBox null put setpagedevice&quot;
</code></pre>
<p>On my system this file is available at:</p>
<pre><code>/System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/PrintCore.framework/Versions/A/Resources/Generic.ppd
</code></pre>
&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></li>
<li id="fn:2">
<p>Described in the <a href="/resources/pdfs/adobe-type-1.pdf">Adobe Type 1 Font Format</a>. Adobe later competed with Apple and Microsoft's TrueType fonts in the <a href="/resources/pdfs/font-wars.pdf">Font Wars</a>, which ended with Microsoft and Adobe creating OpenType fonts, and Adobe <a href="https://helpx.adobe.com/fonts/kb/postscript-type-1-fonts-end-of-support.html">ending support</a> for Type 1 fonts as of 2023.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p><a href="https://support.apple.com/kb/sp472?locale=en_US">Apple's LaserWriter: Technical Specifications</a>. Apart from PostScript, the printer also supported the Diablo 630 language; <a href="https://www.poota.com/lpbook/05-chp5.html">Chapter 5 of <em>A Laser Printing Book</em> by Steven Burrows</a> says this:</p>
<blockquote>
<p>The Xerox Diablo 630 daisywheel printer was for many years the industry standard letter-quality printer for business correspondence, and was widely emulated by other printer manufacturers. As a daisywheel printer the Diablo 630 could not print graphics, and had few font selection capabilities, but it is a useful emulation on laser printers which may be used with very old word-processing software.</p>
</blockquote>
<p>Below is an ad from a <a href="https://books.google.com/books?id=NxcYP0D6EBsC&amp;pg=RA1-PA10">1981 edition of Computerworld</a>:</p>
<figure>
<img src="/resources/images/2023-07-27-hp-designjet-650c/diablo-630-ad.jpg" alt="Diablo 630 Ad" />
<figcaption>Diablo 630 Ad</figcaption>
</figure>
<p>Plastic <em>and</em> metal daisy wheels? Sign me up. A <a href="https://books.google.com/books?id=3j4EAAAAMBAJ&amp;pg=PA50">1981 edition of InfoWorld</a> says the price was $2495, or around $8,000 adjusted for inflation.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>In <a href="https://blog.adobe.com/en/publish/2018/06/14/evolution-digital-document-celebrating-adobe-acrobats-25th-anniversary"><em>Evolution of the Digital Document: Celebrating Adobe Acrobat’s 25th Anniversary</em></a>, Bryan Lamkin writes</p>
<blockquote>
<p>In 1985, he created a new PostScript graphics program (which would later become Acrobat Distiller) and used it to re-code an old federal tax return form. When Steve Jobs unveiled the Apple LaserWriter that year, one of the documents he printed out on stage was John’s 1040 tax form. With Apple on board, Adobe helped launch the desktop publishing revolution.</p>
</blockquote>
<p>The validity of this claim is clouded by the fact that according to Warnock it wasn't &quot;an old federal tax return form,&quot; but a form he had hand-coded in PostScript.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>The PostScript Language Reference is colloquially known as the &quot;red book&quot;; there is also the <a href="/resources/pdfs/adobe-blue-book.pdf">PostScript Language Tutorial and Cookbook</a>, known as the &quot;blue book&quot;, and the <a href="/resources/pdfs/adobe-green-book.pdf">PostScript Language Program Design</a>, or &quot;green book&quot;. The PostScript Language Reference has three editions; there are PDFs of the <a href="/resources/pdfs/adobe-red-book-v2.pdf">second</a> and <a href="/resources/pdfs/adobe-red-book-v3.pdf">third</a>, the third edition being available directly from <a href="https://www.adobe.com/jp/print/postscript/pdfs/PLRM.pdf">Adobe</a>. The first edition is available to borrow on <a href="https://archive.org/details/postscriptlangua00adob">archive.org</a>. Fermilab maintained a copy of all of these documents, and they are now available only on the <a href="https://web.archive.org/web/20071214091155/http://www-cdf.fnal.gov/offline/PostScript/">Wayback Machine</a>; unfortunately <code>PLRM1.PDF</code> is actually a copy of the second edition. That folder contains some neat PostScript example programs, such as one to generate a fractal fern.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2023-06-08-airprint-with-cups</id>
    <title>AirPrint with CUPS</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2023-06-08-airprint-with-cups" />
    <published>2023-06-08T00:00:00-05:00</published>
    <summary>How to print from your iPhone to HP printers using CUPS.</summary>
    
    <media:content url="https://connor.zip/resources/images/2023-06-08-airprint-with-cups/hp-4100n.jpg" medium="image" width="800" height="533"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <p>Using Linux and other free software, it's possible to revive an older but perfectly functioning printer as an AirPrint printer, allowing simple <a href="https://openprinting.github.io/driverless/">driverless printing</a> from your iPhone and other devices. This article outlines a solution using the following:</p>
<ul>
<li>A Linux distro, I use <a href="https://fedoraproject.org/">Fedora</a> release 35</li>
<li>Common UNIX Printing System or <a href="https://www.cups.org/">CUPS</a> for network printing</li>
<li><a href="https://www.avahi.org/">Avahi</a> for multicast DNS (mDNS) or DNS Service Discovery (DNS-SD) aka Bonjour</li>
</ul>
<p>In this article we connect the print server to the printer over the network, which requires a printer with a network interface, possibly via a network card. A print server is any physical or virtual machine running Linux with CUPS and Avahi installed; in my case it's a VM running on VMWare on an HP ProLiant DL380G7 rack server. The printer can also be physically connected to the print server through USB, serial, parallel, or any other mechanism supported by CUPS. It could even be an authenticating IPP print service being accessed over the Internet.</p>
<p>Printing from your iPhone utilizes AirPrint, an Apple technology built on two open standards: Internet Printing Protocol (IPP) and mDNS. IPP, controlled by the <a href="https://pwg.org/">Printer Working Group</a>, provides a standard way to send jobs to a printer, track jobs, receive errors, etc.; Multicast DNS (mDNS) with DNS Service Discovery (DNS-SD), also called zero configuration networking or ZeroConf<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>, is a way to advertise services on the local network alongside capabilities information. <a href="https://www.pwg.org/ipp/everywhere.html">IPP Everywhere</a> is an open standard which works similarly to AirPrint, and is compatible with it.</p>
<p>Installing Linux on your chosen target system is out of scope for this article. In my case I uploaded an ISO to VMWare and booted a new VM with it as the CD-ROM; once installed I did some housekeeping:</p>
<pre><code class="language-sh"># Connecting over SSH as my user
; mkdir -p .ssh
; chmod 700 .ssh/
; cd .ssh/
# Add my SSH public key so I can login without a password
; echo 'my-public-key' &gt;&gt; authorized_keys
; chmod 600 authorized_keys
# Allow VMWare better interop with the guest
; sudo dnf install open-vm-tools
</code></pre>
<h2 id="goal">Goal</h2>
<p>Our goal print job flow is the following:</p>
<figure class="graphviz">
<svg width="583pt" height="170pt" viewBox="0.00 0.00 583.25 170.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 166)"><polygon fill="white" stroke="none" points="-4,4 -4,-166 579.25,-166 579.25,4 -4,4"/><!-- iphone --><g id="node1" class="node"><title>iphone</title><polygon fill="none" stroke="black" points="80.25,-108 0,-108 0,-54 80.25,-54 80.25,-108"/><text text-anchor="middle" x="40.12" y="-76.7" font-family="Times,serif" font-size="14.00">iPhone</text></g><!-- vm --><g id="node2" class="node"><title>vm</title><polygon fill="none" stroke="black" points="314.25,-117 181.5,-117 181.5,-45 314.25,-45 314.25,-117"/><text text-anchor="middle" x="247.88" y="-85.7" font-family="Times,serif" font-size="14.00">Print Server VM</text>
<text text-anchor="middle" x="247.88" y="-67.7" font-family="Times,serif" font-size="14.00">Avahi + CUPS</text>
</g>
<!-- iphone&#45;&gt;vm -->
<g id="edge1" class="edge">
<title>iphone&#45;&gt;vm</title>
<path fill="none" stroke="black" d="M80.31,-81C105.54,-81 139.19,-81 169.64,-81"/>
<polygon fill="black" stroke="black" points="169.61,-84.5 179.61,-81 169.61,-77.5 169.61,-84.5"/>
<text text-anchor="middle" x="130.88" y="-85.7" font-family="Times,serif" font-size="14.00">mDNS/IPP</text>
</g>
<!-- hp4100n -->
<g id="node3" class="node">
<title>hp4100n</title>
<polygon fill="none" stroke="black" points="573.38,-162 419.62,-162 419.62,-90 573.38,-90 573.38,-162"/>
<text text-anchor="middle" x="496.5" y="-130.7" font-family="Times,serif" font-size="14.00">HP LaserJet 4100N</text>
<text text-anchor="middle" x="496.5" y="-112.7" font-family="Times,serif" font-size="14.00">(EIO network card)</text>
</g>
<!-- vm&#45;&gt;hp4100n -->
<g id="edge2" class="edge">
<title>vm&#45;&gt;hp4100n</title>
<path fill="none" stroke="black" d="M314.64,-93C343.36,-98.24 377.45,-104.46 408.25,-110.08"/>
<polygon fill="black" stroke="black" points="407.48,-113.5 417.95,-111.85 408.74,-106.61 407.48,-113.5"/>
<text text-anchor="middle" x="366" y="-112.23" font-family="Times,serif" font-size="14.00">AppSocket</text>
</g>
<!-- hp650c -->
<g id="node4" class="node">
<title>hp650c</title>
<polygon fill="none" stroke="black" points="575.25,-72 417.75,-72 417.75,0 575.25,0 575.25,-72"/>
<text text-anchor="middle" x="496.5" y="-40.7" font-family="Times,serif" font-size="14.00">HP DesignJet 650C</text>
<text text-anchor="middle" x="496.5" y="-22.7" font-family="Times,serif" font-size="14.00">(MIO network card)</text>
</g>
<!-- vm&#45;&gt;hp650c -->
<g id="edge3" class="edge">
<title>vm&#45;&gt;hp650c</title>
<path fill="none" stroke="black" d="M314.64,-69C342.86,-63.85 376.25,-57.76 406.61,-52.22"/>
<polygon fill="black" stroke="black" points="406.95,-55.71 416.16,-50.48 405.7,-48.83 406.95,-55.71"/>
<text text-anchor="middle" x="366" y="-69.86" font-family="Times,serif" font-size="14.00">AppSocket</text>
</g>
</g>
</svg>
</figure>
<p>In more detail:</p>
<ul>
<li>Avahi will send multicast UDP packets which are received by every device on the local network. These packets contain the <code>SRV</code> records which define our IPP Everywhere (AirPrint) service and include information like the IP and port of the Internet Printing Protocol (IPP) service CUPS provides, the capabilities of our printer, etc.</li>
<li>Our iPhone receives this <code>SRV</code> record and updates its database of local services. When we open a print dialogue, we can choose from these advertised IPP services presented as printers.</li>
<li>When we print a document, our iPhone sends the PDF or other common raster format over IPP to our CUPS server, along with metadata such as single-sided or two-sided printing, etc. CUPS returns a job id for the submission and our iPhone can check the status of the job and report any errors to us via IPP.</li>
<li>CUPS transforms the job from PDF or common raster format, through filters, into a format that the printer driver can understand such as PostScript. The driver then converts this format to a language the printer can understand such as HP Printer Command Language (PCL) or HP Raster Transfer Language.</li>
<li>This final (usually raster) format is sent over the network via AppSocket (TCP on port 9100) to the printer for processing. The printer may report errors or a successful print, which CUPS will report via IPP when our iPhone checks on the job.</li>
</ul>
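<p>The IPP leg of the flow above can be sketched in a few lines of Python. This is a minimal Print-Job request following RFC 8010 (encoding) and RFC 8011 (semantics), not the exact bytes an iPhone sends; the printer URI is a placeholder:</p>
<pre><code class="language-python">import struct

def ipp_attr(tag, name, value):
    """Encode one IPP attribute: value-tag, name-length/name, value-length/value."""
    n = name.encode()
    return bytes([tag]) + struct.pack(">H", len(n)) + n + struct.pack(">H", len(value)) + value

def build_print_job(printer_uri, document):
    """Build a minimal IPP 2.0 Print-Job request body."""
    msg = struct.pack(">HHI", 0x0200, 0x0002, 1)  # version 2.0, Print-Job (0x0002), request-id 1
    msg += b"\x01"                                # operation-attributes-tag
    msg += ipp_attr(0x47, "attributes-charset", b"utf-8")        # charset
    msg += ipp_attr(0x48, "attributes-natural-language", b"en")  # naturalLanguage
    msg += ipp_attr(0x45, "printer-uri", printer_uri.encode())   # uri
    msg += ipp_attr(0x49, "document-format", b"application/pdf") # mimeMediaType
    msg += b"\x03"                                # end-of-attributes-tag
    return msg + document                         # document data follows the message

# body = build_print_job("ipp://cups.home.arpa:631/printers/printer", pdf_bytes)
</code></pre>
<p>The resulting body is sent as an HTTP POST with <code>Content-Type: application/ipp</code> to the printer's IPP endpoint, and CUPS replies with a job id in the same binary format.</p>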
<p><em>AppSocket</em> is a protocol developed by <a href="http://kestrel.nmt.edu/~raymond/software/lprng-doc/LPRng-Reference-Multipart/appsocket.htm">Tektronix</a> in which a driver can talk to a printer over a TCP connection to port 9100 on the printer's network interface -- the protocol is also called JetDirect (because of its usage with JetDirect cards) or RAW. The HP printer listens for commands from network connections as if it were directly connected to a PC over serial or parallel, supporting the same formats such as Adobe PostScript, HP Printer Command Language (PCL), or its subset HP Raster Transfer Language (RTL).</p>
<p>It is imperative that the printer's network interface not be accessible from the Internet. Access to the printer's network interface should be limited to the local network, and preferably only to the print server via a VLAN or physical network partition. Since it has no authentication mechanism, AppSocket will accept and print anything sent to it. You can navigate to your printer's IP on port 9100, e.g. <code>http://printer.home.arpa:9100</code>, and the printer will print out the incoming HTTP request as if it were a print job.</p>
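<p>In fact, the entire client side of AppSocket is just a TCP connect and a write. A minimal sketch (assuming <code>printer.home.arpa</code> resolves to your printer):</p>
<pre><code class="language-python">import socket

def appsocket_send(host, payload, port=9100):
    """Send a print job over AppSocket/JetDirect: a bare TCP stream on port 9100."""
    with socket.create_connection((host, port), timeout=10) as conn:
        conn.sendall(payload)          # the printer treats these bytes as job data
        conn.shutdown(socket.SHUT_WR)  # half-close signals end of job

# appsocket_send("printer.home.arpa", b"Hello, LaserJet!\r\n\x0c")  # 0x0c is a form feed
</code></pre>
<p>Anything written to that socket is interpreted as job data, which is exactly why the interface must stay off the open Internet.</p>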
<h2 id="installing-and-configuring">Installing and Configuring</h2>
<p>To install CUPS and Avahi, you should be able to use your distro's package manager:</p>
<pre><code class="language-sh">; sudo dnf install cups

# These may be unnecessary
; sudo systemctl enable cups --now
; sudo systemctl enable avahi-daemon --now
</code></pre>
<p>On Fedora, the <code>cups</code> package pulls in Avahi through its dependency on <code>cups-browsed</code>, but this may not be the case on all distributions.</p>
<p>If you use a firewall, you need to poke some holes to allow your devices to communicate with CUPS over your local network:</p>
<pre><code class="language-sh">; sudo firewall-cmd --zone=public --add-service ipp --permanent
; sudo firewall-cmd --zone=public --add-service mdns --permanent
; sudo firewall-cmd --reload
</code></pre>
<p>Now <code>ipp</code> and <code>mdns</code> are among the allowed services:</p>
<pre><code class="language-sh">; sudo firewall-cmd --list-services
... ipp mdns ...
</code></pre>
<p>If you're using an HP printer, you'll want to install <code>hplip</code> so that CUPS can use those drivers:</p>
<pre><code class="language-sh">; sudo dnf install hplip
</code></pre>
<h3 id="configuring-cups">Configuring CUPS</h3>
<p>To configure CUPS to accept traffic from other network devices, we need to edit <code>/etc/cups/cupsd.conf</code>, by replacing the <code>Listen</code> directive for localhost with a general <code>Port</code> directive:</p>
<pre><code>#Listen localhost:631
Port 631
</code></pre>
<p>To allow <em>all</em> (even unauthenticated admin) access from your local network to the Web UI, add this to <code>/etc/cups/cupsd.conf</code> (more information at <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/configuring_and_using_a_cups_printing_server/index#configuring-printing_configuring-and-using-a-cups-printing-server">Red Hat</a> and <a href="https://www.cups.org/doc/man-cupsd.conf.html"><code>man cupsd.conf</code></a>):</p>
<pre><code>LogLevel info
MaxLogSize 1m
ErrorPolicy stop-printer
ServerAlias *
# Allow remote access
Port 631
Listen /run/cups/cups.sock
WebInterface Yes
IdleExitTimeout 60
Browsing No
&lt;Location /&gt;
  Allow @LOCAL
&lt;/Location&gt;
</code></pre>
<p>A <code>LogLevel</code> of <code>info</code> will give you more information on when the Web UI or IPP service is being used. The <code>Location</code> block is required to allow printing from the local network. Omitting any <code>Policy</code> blocks seems to allow any user to access the admin Web UI without authentication. Setting <code>Browsing No</code> means CUPS won't produce mDNS entries of its own; instead we'll create them manually so they work with AirPrint. You'll need to restart CUPS for config changes to take effect:</p>
<pre><code class="language-sh">; sudo systemctl restart cups.service
</code></pre>
<p>At this point you'll want to configure your printer in CUPS through the Web UI:</p>
<ul>
<li>if you have an older printer and will be using &quot;classic drivers&quot;, follow <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/configuring_and_using_a_cups_printing_server/index#adding-a-printer-with-a-classic-driver-cups-web-ui_configuring-printing">these directions</a>,</li>
<li>for a more modern printer which supports <em>driverless printing</em> follow <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/configuring_and_using_a_cups_printing_server/index#configuring-driverless-printing_configuring-printing">these directions</a>.</li>
</ul>
<p>Remember to check the share-printer option; otherwise CUPS will restrict access to the printer.</p>
<p>You can also add printers via the <code>/etc/cups/printers.conf</code> file <em>while <code>cupsd</code> is stopped</em>, if you also add a corresponding PPD file of the same name under <code>/etc/cups/ppd</code>. Editing the <code>printers.conf</code> file manually is not recommended, and the options available are undocumented<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>. See this incomplete <a href="https://opensource.apple.com/source/cups/cups-136.9/cups/doc/help/ref-printers-conf.html">documentation</a> from Apple for more info on each option. Mine is below:</p>
<pre><code>NextPrinterId 3

&lt;DefaultPrinter printer&gt;
PrinterId 1
UUID urn:uuid:fb026ac7-8613-3eb4-5b51-9749073f7704
AuthInfoRequired none
Info HP LaserJet 4100N
Location Office
MakeModel HP LaserJet 4100 Series hpijs pcl3, 3.21.2
DeviceURI socket://printer.home.arpa
State Idle
StateTime 1686858786
ConfigTime 1680818482
Type 8425500
Accepting Yes
Shared Yes
JobSheets none none
QuotaPeriod 0
PageLimit 0
KLimit 0
OpPolicy default
ErrorPolicy stop-printer
Attribute marker-colors none
Attribute marker-levels 27
Attribute marker-names Toner Cartridge HP C8061X
Attribute marker-types toner
Attribute marker-change-time 1686858786
&lt;/DefaultPrinter&gt;

&lt;Printer designjet&gt;
PrinterId 2
UUID urn:uuid:b691f515-4a45-35da-5707-debfda29d917
Info HP DesignJet 650C
Location Office
MakeModel HP DesignJet 650C Foomatic/dnj650c (recommended)
DeviceURI socket://designjet.home.arpa
State Idle
StateTime 1683579922
ConfigTime 1671595356
Type 8450060
Accepting Yes
Shared Yes
JobSheets none none
QuotaPeriod 0
PageLimit 0
KLimit 0
OpPolicy default
ErrorPolicy stop-printer
&lt;/Printer&gt;
</code></pre>
<p>Both printers I have configured communicate via the AppSocket protocol (<code>socket://</code>, i.e. TCP port 9100) over the local network. I use <a href="https://www.pfsense.org/">pfSense</a> running on a VM as my home router, which provides DNS resolution under <code>.home.arpa</code>, the <a href="https://www.rfc-editor.org/rfc/rfc8375.html">recommended</a> home DNS suffix, which is why the device URIs aren't IP addresses. I also recommend configuring your router to assign static IPs (through DHCP) to network devices like printers, so that configurations like this can rely on a stable address. Alternatively, if your printer or network card is sophisticated enough (like the later EIO cards), you can use an <a href="https://www.cups.org/doc/network.html#AUTOMATIC">mDNS address</a> with a URI like <code>dnssd://HP-LaserJet-4100n._pdl-datastream._tcp.local</code> (optionally with <code>?uuid=</code>). The problem with using mDNS for this is that your printer will then be advertising itself, and will show up in the Mac print dialogue if it advertises <code>_ipp._tcp.</code>, so I disable mDNS on the printer itself.</p>
<p>You will also need a PPD under <code>/etc/cups/ppd</code> that is either created via the CUPS UI wizard or uploaded manually (e.g. when fetched from <a href="https://www.openprinting.org/printer/HP/HP-DesignJet_650C">openprinting.org</a>). CUPS PPD files support <a href="https://www.cups.org/doc/spec-ppd.html">extensions</a> atop <a href="/resources/pdfs/adobe-ppd-spec.pdf">Adobe PPD version 4.3</a><sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>.</p>
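<p>For context, a PPD is a plain-text file of keyword/value statements, with CUPS-specific extensions prefixed with <code>cups</code>. A trimmed, illustrative fragment (not a complete, working PPD):</p>
<pre><code>*PPD-Adobe: "4.3"
*FormatVersion: "4.3"
*ModelName: "HP LaserJet 4100 Series"
*LanguageLevel: "2"
*DefaultPageSize: Letter
*cupsFilter: "application/vnd.cups-raster 0 rastertohp"
</code></pre>
<p>The full list of CUPS-specific keywords is in the extensions spec linked above.</p>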
<p>CUPS will eventually <a href="https://github.com/OpenPrinting/cups-sharing/issues/4">deprecate</a> using printer drivers with PPDs, in favor of using a Printer Application (as described in <a href="https://openprinting.github.io/documentation/01-printer-application/"><em>A New Way to Print in Linux</em></a>) or <a href="https://github.com/michaelrsweet/pappl">PAPPL</a>. A PAPPL-based app presents an IPP interface which accepts PDFs, converts them (the same way that CUPS does internally) to the custom or raster format of the actual printer, and sends the result over the appropriate interface. There are already printer apps for HP printers that use <code>hplip</code> (such as my HP LaserJet 4100n), <a href="https://github.com/michaelrsweet/hp-printer-app"><code>hp-printer-app</code></a>; for printers that use a GhostScript driver (such as my HP DesignJet 650c), <a href="https://github.com/OpenPrinting/ghostscript-printer-app"><code>ghostscript-printer-app</code></a>; and for printers that use a GutenPrint driver, <a href="https://github.com/OpenPrinting/gutenprint-printer-app"><code>gutenprint-printer-app</code></a>. I plan to migrate to these printer apps, but I've run into the following issues:</p>
<ul>
<li>Through <code>snap</code>, I can't configure <code>hp-printer-app</code> to treat <code>hp-printer-app.home.arpa</code> as its hostname. The UI can be served at a custom local domain and proxied via nginx, but its links redirect to <code>localhost:8000</code>.</li>
<li>Once I make <code>:8000</code> available via firewall and add an Avahi service file to advertise AirPrint, printing via it fails with <code>Get-Jobs client-error-not-found (printer-uri ipp://misc.local.:8000/printers/HP_LaserJet_4100n not found.)</code>. I need to know how to construct the printer URI for the <code>rp</code> field in the service definition.</li>
</ul>
<p>After the printer is configured, print a test page to verify that CUPS can communicate with the printer and that the chosen driver works with your model.</p>
<h3 id="configuring-avahi">Configuring Avahi</h3>
<p>Once that's done, you'll need to generate your Avahi service config. It's not difficult to write one manually; the file below is what I use for my HP LaserJet, saved at <code>/etc/avahi/services/AirPrint-printer.service</code>:</p>
<pre><code class="language-xml">&lt;?xml version=&quot;1.0&quot; ?&gt;
&lt;!DOCTYPE service-group  SYSTEM 'avahi-service.dtd'&gt;
&lt;service-group&gt;
	&lt;name replace-wildcards=&quot;yes&quot;&gt;HP LaserJet 4100N&lt;/name&gt;
	&lt;service&gt;
		&lt;type&gt;_ipp._tcp&lt;/type&gt;
		&lt;subtype&gt;_universal._sub._ipp._tcp&lt;/subtype&gt;
		&lt;port&gt;631&lt;/port&gt;
		&lt;txt-record&gt;txtvers=1&lt;/txt-record&gt;
		&lt;txt-record&gt;qtotal=1&lt;/txt-record&gt;
		&lt;txt-record&gt;UUID=AFEB3961-F48A-499A-B6C5-605A723CECF4&lt;/txt-record&gt;
		&lt;txt-record&gt;Transparent=T&lt;/txt-record&gt;
		&lt;txt-record&gt;Binary=T&lt;/txt-record&gt;
		&lt;txt-record&gt;TBCP=T&lt;/txt-record&gt;
		&lt;txt-record&gt;kind=document,envelope&lt;/txt-record&gt;
		&lt;txt-record&gt;Duplex=T&lt;/txt-record&gt;
		&lt;txt-record&gt;URF=DM3&lt;/txt-record&gt;
		&lt;txt-record&gt;rp=printers/printer&lt;/txt-record&gt;
		&lt;txt-record&gt;note=Office&lt;/txt-record&gt;
		&lt;txt-record&gt;product=(GPL Ghostscript)&lt;/txt-record&gt;
		&lt;txt-record&gt;printer-state=3&lt;/txt-record&gt;
		&lt;txt-record&gt;printer-type=0x409014&lt;/txt-record&gt;
		&lt;txt-record&gt;pdl=application/octet-stream,application/pdf,application/postscript,application/vnd.cups-raster,image/gif,image/jpeg,image/png,image/tiff,image/urf,text/html,text/plain,application/vnd.adobe-reader-postscript,application/vnd.cups-pdf&lt;/txt-record&gt;
	&lt;/service&gt;
&lt;/service-group&gt;
</code></pre>
<p>You can also use the <a href="https://raw.githubusercontent.com/tjfontaine/airprint-generate/master/airprint-generate.py"><code>airprint-generate.py</code></a> script by Timothy Fontaine. To get <code>airprint-generate.py</code> running, you'll need to install some dependencies:</p>
<pre><code class="language-sh">; sudo dnf install pip gcc clang libpython python3-devel cups-devel
; pip3 install wheel pycups
</code></pre>
<p>Then run the script and copy the output to <code>/etc/avahi/services/</code>.</p>
<p>The <code>pdl</code> must include <code>image/urf</code> to work with AirPrint, from <a href="https://support.apple.com/guide/deployment/airprint-payload-settings-dep3b4cf515/web#dep253619ad9:~:text=AirPrint%20devices%20don%E2%80%99t%20browse%20for%20all%20IPP%20printers%E2%80%94they%20browse%20only%20for%20the%20subset%20of%20IPP%20printers%20that%20support%20Universal%20Raster%20Format%20(URF)">Apple's documentation</a>:</p>
<blockquote>
<p>AirPrint devices don’t browse for all IPP printers—they browse only for the subset of IPP printers that support Universal Raster Format (URF).</p>
</blockquote>
<p>The service for my HP DesignJet is very similar, but I made some changes (some necessary, some for curiosity's sake). I'll walk through them:</p>
<p>I replaced the name with</p>
<pre><code class="language-xml">	&lt;name replace-wildcards=&quot;yes&quot;&gt;HP DesignJet 650C&lt;/name&gt;
</code></pre>
<p>The UUID with</p>
<pre><code class="language-xml">		&lt;txt-record&gt;UUID=3C84F9A5-A870-475F-A758-43DDF9B290EC&lt;/txt-record&gt;
</code></pre>
<p>Added a <code>TLS</code> record, changed <code>kind</code> to reflect a large format printer, toggled <code>Duplex</code> off, toggled <code>Color</code> on, and changed <code>PaperMax</code> and <code>PaperCustom</code> to illustrate that the printer can print to large formats:</p>
<pre><code class="language-xml">		&lt;txt-record&gt;TLS=1.2&lt;/txt-record&gt;
		&lt;txt-record&gt;kind=large-format,roll&lt;/txt-record&gt;
		&lt;txt-record&gt;Color=T&lt;/txt-record&gt;
		&lt;txt-record&gt;PaperMax=&gt;isoC-A2&lt;/txt-record&gt;
		&lt;txt-record&gt;PaperCustom=T&lt;/txt-record&gt;
</code></pre>
<p>Changed <code>rp</code> to the printer name, added a <code>ty</code> field:</p>
<pre><code class="language-xml">		&lt;txt-record&gt;rp=printers/designjet&lt;/txt-record&gt;
		&lt;txt-record&gt;ty=HP DesignJet 650C&lt;/txt-record&gt;
</code></pre>
<p>and omitted the <code>printer-state</code> and <code>printer-type</code> fields.</p>
<p>If you have a Mac, you can use the <a href="https://apps.apple.com/us/app/discovery-dns-sd-browser/id1381004916?mt=12">Discovery</a> app to see mDNS services available on the local network, which is useful for debugging: this is what your iPhone sees when it searches for nearby AirPrint-enabled printers.</p>
<figure>
<img src="/resources/images/2023-06-08-airprint-with-cups/discovery.png" alt="Discovery app showing mDNS services on my local network" />
<figcaption>Discovery app showing mDNS services on my local network</figcaption>
</figure>
<p>The important services are under <code>_ipp._tcp.</code>; you should see one per Avahi service file. Expanding a service should display each <code>txt-record</code> option as a row.</p>
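<p>If you don't have a Mac, you can ask for the same records from any machine. Discovery starts with a plain DNS question for the <code>_ipp._tcp.local</code> PTR record, multicast to <code>224.0.0.251:5353</code>. A rough sketch of that query packet:</p>
<pre><code class="language-python">import struct

def mdns_ptr_query(service="_ipp._tcp.local"):
    """Build a one-question DNS packet asking for PTR records of a DNS-SD service type."""
    header = struct.pack(">HHHHHH", 0, 0, 1, 0, 0, 0)  # id 0, flags 0, one question
    # QNAME: length-prefixed labels, terminated by the root (zero) label
    qname = b"".join(bytes([len(label)]) + label.encode() for label in service.split("."))
    qname += b"\x00"
    return header + qname + struct.pack(">HH", 12, 1)  # QTYPE 12 (PTR), QCLASS 1 (IN)

# To send it for real:
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(mdns_ptr_query(), ("224.0.0.251", 5353))
</code></pre>
<p>Avahi answers such queries with the PTR, SRV, and TXT records generated from the service file above.</p>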
<p>If <em>Printer Sharing</em> is enabled in CUPS and Avahi is installed, an IPP service will be advertised via mDNS but it won't have the necessary TXT records for AirPrint which indicate what capabilities the printer has. You should disable Printer Sharing to avoid duplicate printers (one driver-less, one conventional Bonjour) showing on a Mac. More information on each TXT record is available in the <a href="https://developer.apple.com/bonjour/printing-specification/bonjourprinting-1.2.1.pdf">Bonjour Printing</a> specification, in Table 2 on page 17 for <em>description</em> records and in Table 3 on page 23 for <em>capability</em> records.</p>
<p>Once your Avahi service is configured, you should see your printer on your iPhone. Try printing a page of a PDF; if it prints successfully, <strong>congratulations</strong>. If not:</p>
<ul>
<li>If an error is returned, determine if it is a printer fault or a software issue.</li>
<li>Ensure printing a test page from CUPS works, so that we can rule out a CUPS issue.</li>
<li>Ensure the Avahi service being advertised points at your CUPS server and is accessible from your local network. Ensure the firewall is not blocking mDNS or IPP.</li>
<li>Use <a href="https://www.cups.org/doc/man-ipptool.html"><code>ipptool</code></a> to print a test job to the IP, port, and <code>rp</code> path that the mDNS service advertises.</li>
<li>If the request is reaching CUPS, look at the CUPS admin UI under printers and jobs for job status.</li>
<li>Ensure the printer isn't paused (by default it will be paused after a failed job).</li>
<li>Look at the <code>systemctl status cups.service</code> output and logs.</li>
</ul>
<h3 id="using-nginx-as-a-cups-ui-proxy">Using Nginx as a CUPS UI proxy</h3>
<p>If you'd like to make CUPS available on the standard HTTP/HTTPS ports, you can install nginx:</p>
<pre><code class="language-sh">; sudo dnf install nginx
</code></pre>
<p>For a <em>TLS-enabled server</em>, add this configuration to <code>/etc/nginx/conf.d/cups.conf</code>:</p>
<pre><code>server {
    listen       80;
    listen       [::]:80;
    server_name  cups.home.arpa;
    root         /usr/share/nginx/html;

    return 301 https://$host$request_uri;
}

# Settings for a TLS enabled server.
server {
    listen       443 ssl http2;
    listen       [::]:443 ssl http2;
    server_name  cups.home.arpa;
    root         /usr/share/nginx/html;

    ssl_certificate &quot;/etc/pki/nginx/server.crt&quot;;
    ssl_certificate_key &quot;/etc/pki/nginx/private/server.key&quot;;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout  10m;
    ssl_ciphers PROFILE=SYSTEM;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://127.0.0.1:631/;
        proxy_set_header Host &quot;127.0.0.1&quot;;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
</code></pre>
<p>With a TLS-enabled server, you need to create some PKI and add the cert to both the server running CUPS and your laptop. The easiest way to set up PKI is using <a href="https://blog.cloudflare.com/introducing-cfssl/"><code>cfssl</code></a>. By setting <code>server_name</code> to <code>cups.home.arpa</code>, a DNS alias I added in my router (pfSense) which points to the same IP that DHCP assigns to the VM, I'm able to host multiple web interfaces for different sites on the same VM.</p>
<h2 id="why-print-in-2023">Why print in 2023?</h2>
<p>Aside from the occasional form, I often print recipes from the <a href="https://cooking.nytimes.com/">NYT Cooking</a> app. Their iOS app produces beautiful and readable print-outs in crisp black-and-white serif text, with ingredients in the left-hand column and steps on the right. Printing recipes allows me to dirty my hands cooking without needing to constantly unlock my phone.</p>
<div class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn:1">
<p>For more information about mDNS, see the book <a href="https://www.oreilly.com/library/view/zero-configuration-networking/0596101007/">Zero Configuration Networking: The Definitive Guide</a>.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>CUPS parses the <code>printers.conf</code> file in the <a href="https://github.com/apple/cups/blob/ec055da6794984133d48cc376f04e10af62b64dc/scheduler/printers.c#L937"><code>cupsdLoadAllPrinters</code></a> routine, which contains the names of the options. <code>man printers.conf</code> notes: &quot;The name, location, and format of this file are an implementation detail that will change in future releases of CUPS.&quot;&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Adobe no longer hosts a copy of the spec, but MIT hosts a <a href="https://web.mit.edu/PostScript/Adobe/Documents/5003.PPD_Spec_v4.3.pdf">copy</a> of several Adobe tech notes, which can be accessed by replacing the file name in the URL with the original Adobe file name, e.g. <code>5003.PPD_Spec_v4.3.pdf</code>.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2023-06-06-zapata-history</id>
    <title>History of Zapata</title>
    <author><name>Jim Dixon</name></author>
    <link href="https://connor.zip/posts/2023-06-06-zapata-history" />
    <published>2023-06-06T00:00:00-05:00</published>
    <summary>A History of Zapata Telephony and how it relates to Asterisk PBX.</summary>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
<p>A history of the Zapata project, taken from <a href="http://zapatatelephony.org/">zapatatelephony.org</a>, is reproduced below. The legacy of Zapata and the Zaptel drivers from the early 2000s lives on in the Digium T1 and TDM cards and in the DAHDI drivers. Jim Dixon passed away in December 2022.</p>
<blockquote>
<p>About 25-30 or so years ago, AT&amp;T started offering an API (well, one to an extent, at least) allowing users to customize functionality of their Audix voicemail/attendant system which ran on an AT&amp;T 3BX usually 3B10) Unix platform. This system cost thousands of dollars a port, and had very limited functionality.</p>
<p>In an attempt to make things more possible and attractive (especially to those who didnt have an AT&amp;T PBX or Central Office switch to hook Audix up to) a couple of manufacturers came out with a card that you could put in your PC, which ran under MS-DOS, and answered one single POTS line (loopstart FXO only). These were rather low quality, compared with today's standards (not to mention the horrendously pessimal environment in which they had to run), and still cost upwards of $1000 each. Most of these cards ended up being really bad sounding and flaky personal answering machines.</p>
<p>In 1985 or so, a couple of companies came out with pretty-much decent 4 port cards, that cost about $1000 each (wow, brought the cost down to $250 per port!). They worked MUCH more reliably then their single port predecessors, and actually sounded pretty decent, and you could actually put 6 or 8 of them in a fast 286 machine, so a 32 port system was easy to attain. As a result the age of practical Computer Telephony had begun.</p>
<p>As a consultant, I have been working heavily in the area of Computer Telephony ever since it existed. I very quickly became extremely well- versed in the hardware, software and system design aspects of it. This was not difficult, since I already had years of experience in non-computer based telephony.</p>
<p>After seeing my customers (who deployed the systems that I designed, in VERY big ways) spending literally millions of dollars every year (just one of my customers alone would spend over $1M/year alone, not to mention several others that came close) on high density Computer Telecom hardware.</p>
<p>It really tore me apart to see these people spending $5000 or $10000 for a board that cost some manufacturer a few hundred dollars to make. And furthermore, the software and drivers would never work 100% properly. I think one of the many reasons that I got a lot of work in this area, was that I knew all the ways in which the stuff was broken, and knew how to work around it (or not).</p>
<p>In any case, the cards had to be at least somewhat expensive, because they had to contain a reasonable amount of processing power (and not just conventional processing, DSP functionality was necessary), because the PC's to which they were attached just didnt have much processing power at that time.</p>
<p>Very early on, I knew that someday in some &quot;perfect&quot; future out there over the horizon, it would be commonplace for computers to handle all of the necessary processing functionality internally, making the necessary external hardware to connect up to telecom interfaces VERY inexpensive and in some cases trivial.</p>
<p>Accordingly, I always sort of kept a corner of an eye out for what the &quot;Put on your seatbelts, youve never seen one this fast before&quot; processor throughput was becoming over time, and in about the 486-66DX2 era, it looked like things were pretty much progressing at a sort of fixed exponential rate. I knew, especially after the Pentium processors came out, that the time for internalization of Computer Telephony was going to be soon, so I kept a much more watchful eye out.</p>
<p>I figured that if I was looking for this out there, there <em>must</em> be others thinking the same thing, and doing something about it. I looked, and searched and waited, and along about the time of the PentiumIII-1000 (100 MHz Bus) I finally said, &quot;gosh these processors CLEARLY have to be able to handle this&quot;.</p>
<p>But to my dismay, no one had done anything about this. What I hadn't realized was that my vision was 100% right on, I just didnt know that <em>I</em> was going to be one that implemented it.</p>
<p>In order to prove my initial concept I dug out an old Mitel MB89000C &quot;ISDN Express Development&quot; card (an ISA card that had more or less one-of-everything telecom on it for the purpose of designing with their telecom hardware) which contained a couple of T-1 interfaces and a cross-point matrix (Timeslot- Interchanger). This would give me physical access from the PC's ISA bus to the data on the T-1 timeslots (albeit not efficiently, as it was in 8 bit I/O and the TSI chip required MUCHO wait states for access).</p>
<p>I wrote a driver for the kludge card (I had to make a couple of mods to it) for FreeBSD (which was my OS of choice at the time), and determined that I could actually reliably get 6 channels of I/O from the card. But, more importantly, the 6 channels of user-space processing (buffer movement, DTMF decoding, etc), barely took any CPU time at all, thoroughly proving that the 600MHZ PIII I had at the time could probably process 50-75 ports if the BUS I/O didnt take too much of it.</p>
<p>As a result of the success (the 'mie' driver as I called it) I went out and got stuff to wire wrap a new ISA card design that made efficient use of (as it turns out all of) the ISA bus in 16 bit mode with no wait states. I was successful in getting 2 entire T-1's (48 channels) of data transferred over the bus, and the PC was able to handle it without any problems.</p>
<p>So I had ISA cards made, and offered them for sale (I sold about 50 of them) and put the full design (including board photo plot files) on the Net for public consumption.</p>
<p>Since this concept was so revolutionary, and was certain to make a lot of waves in the industry, I decided on the Mexican revolutionary motif, and named the technology and organization after the famous Mexican revolutionary Emiliano Zapata. I decided to call the card the &quot;tormenta&quot; which, in Spanish, means &quot;storm&quot;, but contextually is usually used to imply a &quot;<em>BIG</em> storm&quot;, like a hurricane or such.</p>
<p>That's how Zapata Telephony started.</p>
<p>I wrote a complete driver for the Tormenta ISA card for *BSD, and put it out on the Net. The response I got, with little exception was &quot;well that's great for BSD, but what do you have for Linux?&quot;</p>
<p>Personally, Id never even seen Linux run before. But, I can take a hint, so I went down to the local store (Fry's in Woodland Hills) and bought a copy of RedHat Linux 6.0 off the shelf (I think 7.0 had JUST been released but was not available on shelf yet). I loaded it into a PC, (including full development stuff including Kernel sources). I poked around in the driver sources until I found a VERY simple driver that had all the basics, entry points, interfaces, etc (I used the Video Spigot driver for the most part), and used it to show me how to format (well at least to be functional) a minimal Linux driver. So, I ported the BSD driver over to Linux (actually wasnt <em>that</em> difficult, since most of the general concepts are roughly the same). It didnt have support for loadable kernel modules (heck what was that? in BSD 3.X you have to re-compile the Kernel to change configurations. The last system I used with loadable drivers was VAX/VMS.) but it did function (after you re-compiled a kernel with it included). Since my whole entire experience with Linux consisted of installation and writing a kernel module, I <em>knew</em> that it <em>had</em> to be just wrong, wrong, wrong, full of bad, obnoxious, things, faux pauses, and things that would curl even a happy Penguin's nose hairs.</p>
<p>With this in mind, I announced/released it on the Net, with the full knowledge that some Linux Kernel dude would come along, laugh, then barf, then laugh again, then take pity on me and offer to re-format it into &quot;proper Linuxness&quot;.</p>
<p>Within 48 hours of its posting I got an email from some dude in Alabama (Mark Spencer), who offered to do exactly that. Not only that he said that he had something that would be perfect for this whole thing (Asterisk).</p>
<p>At the time, Asterisk was a functional concept, but had no real way of becoming a practical useful thing, since it didnt, at that time, have a concept of being able to talk directly (or very well indirectly for that matter, being that there wasnt much, if any, in the way of practical VOIP hardware available) to any Telecom hardware (phones, lines, etc). Its marriage with the Zapata Telephony system concept and hardware/driver/ library design and interface allowed it to grow to be a real switch, that could talk to real telephones, lines, etc.</p>
<p>Additionally Mark has nothing short of brilliant insight into VOIP, networking, system internals, etc., and at the beginning of all this had a great interest in Telephones and Telephony. But he had limited experience in Telephone systems, and how they work, particularly in the area of telecom hardware interfaces. From the beginning I was and always have been there, to help him in these areas, both providing information, and implementing code in both the drivers and the switch for various things related to this. We, and now more recently others have made a good team (heck I ask him stuff about kernels, VOIP, and other really esoteric Linux stuff all the time), working for the common goal of bringing the ultimate in Telecom technology to the public at a realistic and affordable price.</p>
<p>Since the ISA card, I designed the &quot;Tormenta 2 PCI Quad T1/E1&quot; card, which Mark marketed as the Digium T400P and E400P, and others have and still are as different part numbers, also). All of the design files (including photo plot files) are available on the this website for public consumption.</p>
<p>As anyone can see, with Mark's dedicated work (and a lot of Mine and other people's) on the Zapata Telephony drivers (now called &quot;DAHDI&quot;) and the Asterisk software, the technologies have come a long, long way, and continue to grow and improve every day.</p>
</blockquote>

      </div>
    </content>
  </entry>
  
  <entry>
    <id>https://connor.zip/posts/2022-11</id>
    <title>Home Phone Service</title>
    <author><name>Connor Taffe</name></author>
    <link href="https://connor.zip/posts/2022-11" />
    <published>2022-11-01T00:00:00-05:00</published>
    <summary>A tutorial on how to set up your own home phone service.</summary>
    
    <media:content url="https://connor.zip/resources/images/2022-11/trimline.jpg" medium="image" width="800" height="800"/>
    
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
<p>This guide covers running your own home phone service, including the following setups:</p>
<p>Proxying SIP traffic from your home without a static IP or NAT traversal.</p>
<ul>
<li>Twilio Elastic SIP Trunking (or any SIP provider)</li>
<li>AWS node
<ul>
<li>Asterisk</li>
<li>Wireguard</li>
</ul>
</li>
<li>HP DL380 G7
<ul>
<li>VMWare</li>
<li>Pfsense</li>
<li>Wireguard</li>
</ul>
</li>
</ul>
<p>How to connect a SIP phone, like a Cisco SPA506g.</p>
<ul>
<li>Any Asterisk server</li>
<li>DC adapter, PoE injector, or PoE switch</li>
<li>Cisco SPA506g on the same network</li>
</ul>
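<p>A SIP phone like the SPA506g just needs an endpoint on the Asterisk server to register against. As a rough sketch using the <code>chan_pjsip</code> configuration format (the section name, extension, and password here are hypothetical placeholders):</p>
<pre><code>; /etc/asterisk/pjsip.conf -- minimal endpoint sketch for a desk phone
[6001]
type=endpoint
context=from-internal
disallow=all
allow=ulaw
auth=6001
aors=6001

[6001]
type=auth
auth_type=userpass
username=6001
password=change-me

[6001]
type=aor
max_contacts=1
</code></pre>
<p>Point the phone's proxy at the Asterisk server's address and it will register as extension 6001.</p>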
<p>How to connect a lot of analog phones, and even modems, using T1.</p>
<ul>
<li>HP DL380 G4
<ul>
<li>PCI slots</li>
<li>Digium TE405P T1 PCI cards</li>
<li>Asterisk</li>
<li>DAHDI drivers (with echo module)</li>
</ul>
</li>
<li>Adit 600
<ul>
<li>FXS cards</li>
</ul>
</li>
<li>Lucent PortMaster 3</li>
</ul>
<p>How to connect a remote set of analog phones over Wifi.</p>
<ul>
<li>Dell Optiplex
<ul>
<li>PCI slot, molex connector to power card</li>
<li>Digium TDM410P with FXS modules</li>
<li>Asterisk</li>
<li>DAHDI drivers</li>
</ul>
</li>
</ul>
<h1 id="using-t1">Using T1</h1>
<p>How to connect a lot of analog phones, and even modems, using T1.</p>
<figure class="graphviz">
<svg width="1111pt" height="134pt" viewBox="0.00 0.00 1111.25 134.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 130)"><polygon fill="white" stroke="none" points="-4,4 -4,-130 1107.25,-130 1107.25,4 -4,4"/><!-- twilio --><g id="node1" class="node"><title>twilio</title><polygon fill="none" stroke="black" points="75,-74 0,-74 0,-20 75,-20 75,-74"/><text text-anchor="middle" x="37.5" y="-42.7" font-family="Times,serif" font-size="14.00">Twilio</text></g><!-- vm --><g id="node2" class="node"><title>vm</title><polygon fill="none" stroke="black" points="281.25,-74 130.5,-74 130.5,-20 281.25,-20 281.25,-74"/><text text-anchor="middle" x="205.88" y="-42.7" font-family="Times,serif" font-size="14.00">Asterisk / AWS VM</text></g>
<!-- twilio&#45;&gt;vm -->
<g id="edge1" class="edge">
<title>twilio&#45;&gt;vm</title>
<path fill="none" stroke="black" d="M75.42,-47C88.44,-47 103.68,-47 119.03,-47"/>
<polygon fill="black" stroke="black" points="118.69,-50.5 128.69,-47 118.69,-43.5 118.69,-50.5"/>
<text text-anchor="middle" x="102.75" y="-51.7" font-family="Times,serif" font-size="14.00">SIP</text>
</g>
<!-- server -->
<g id="node3" class="node">
<title>server</title>
<polygon fill="none" stroke="black" points="525.75,-74 345.75,-74 345.75,-20 525.75,-20 525.75,-74"/>
<text text-anchor="middle" x="435.75" y="-42.7" font-family="Times,serif" font-size="14.00">Asterisk / HP DL380 G4</text>
</g>
<!-- vm&#45;&gt;server -->
<g id="edge2" class="edge">
<title>vm&#45;&gt;server</title>
<path fill="none" stroke="black" d="M281.49,-47C298.21,-47 316.21,-47 333.8,-47"/>
<polygon fill="black" stroke="black" points="333.76,-50.5 343.76,-47 333.76,-43.5 333.76,-50.5"/>
<text text-anchor="middle" x="313.5" y="-51.7" font-family="Times,serif" font-size="14.00">IAX2</text>
</g>
<!-- t1card -->
<g id="node5" class="node">
<title>t1card</title>
<polygon fill="none" stroke="black" points="708,-74 582.75,-74 582.75,-20 708,-20 708,-74"/>
<text text-anchor="middle" x="645.38" y="-42.7" font-family="Times,serif" font-size="14.00">Digium TE420</text>
</g>
<!-- server&#45;&gt;t1card -->
<g id="edge3" class="edge">
<title>server&#45;&gt;t1card</title>
<path fill="none" stroke="black" d="M526.01,-47C541.01,-47 556.43,-47 570.97,-47"/>
<polygon fill="black" stroke="black" points="570.91,-50.5 580.91,-47 570.91,-43.5 570.91,-50.5"/>
<text text-anchor="middle" x="554.25" y="-51.7" font-family="Times,serif" font-size="14.00">PCI</text>
</g>
<!-- adit600 -->
<g id="node4" class="node">
<title>adit600</title>
<polygon fill="none" stroke="black" points="885,-126 795,-126 795,-72 885,-72 885,-126"/>
<text text-anchor="middle" x="840" y="-94.7" font-family="Times,serif" font-size="14.00">Adit 600</text>
</g>
<!-- phone1 -->
<g id="node6" class="node">
<title>phone1</title>
<polygon fill="none" stroke="black" points="1103.25,-126 981.75,-126 981.75,-72 1103.25,-72 1103.25,-126"/>
<text text-anchor="middle" x="1042.5" y="-94.7" font-family="Times,serif" font-size="14.00">Rotary Phone</text>
</g>
<!-- adit600&#45;&gt;phone1 -->
<g id="edge5" class="edge">
<title>adit600&#45;&gt;phone1</title>
<path fill="none" stroke="black" d="M885.5,-99C910.34,-99 941.88,-99 970.09,-99"/>
<polygon fill="black" stroke="black" points="970.02,-102.5 980.02,-99 970.02,-95.5 970.02,-102.5"/>
<text text-anchor="middle" x="951.75" y="-103.7" font-family="Times,serif" font-size="14.00">FXS</text>
</g>
<!-- t1card&#45;&gt;adit600 -->
<g id="edge4" class="edge">
<title>t1card&#45;&gt;adit600</title>
<path fill="none" stroke="black" d="M708.38,-63.74C732.59,-70.28 760.13,-77.71 783.66,-84.06"/>
<polygon fill="black" stroke="black" points="782.48,-87.37 793.05,-86.59 784.3,-80.61 782.48,-87.37"/>
<text text-anchor="middle" x="733.12" y="-76.14" font-family="Times,serif" font-size="14.00">T1</text>
</g>
<!-- portmaster -->
<g id="node7" class="node">
<title>portmaster</title>
<polygon fill="none" stroke="black" points="921.75,-54 758.25,-54 758.25,0 921.75,0 921.75,-54"/>
<text text-anchor="middle" x="840" y="-22.7" font-family="Times,serif" font-size="14.00">Lucent Portmaster 3</text>
</g>
<!-- t1card&#45;&gt;portmaster -->
<g id="edge6" class="edge">
<title>t1card&#45;&gt;portmaster</title>
<path fill="none" stroke="black" d="M708.38,-40.56C720.55,-39.3 733.55,-37.95 746.46,-36.61"/>
<polygon fill="black" stroke="black" points="746.67,-40.1 756.26,-35.59 745.95,-33.14 746.67,-40.1"/>
<text text-anchor="middle" x="733.12" y="-42.9" font-family="Times,serif" font-size="14.00">T1</text>
</g>
</g>
</svg>
</figure>
<!--
dell: Dell Optiplex
server -> dell: IAX2
tdmcard: Digium TDM410P
dell -> tdmcard: PCI
phone2: Rotary Phone
tdmcard -> phone2: FXS
cisco: Cisco SPA506g
server -> cisco: SIP
-->
<p>The diagram above illustrates the flow of an incoming call through the system. First, a call from the Public Switched Telephone Network (PSTN) is routed through a Session Initiation Protocol (SIP) service provider; mine is Twilio. Twilio communicates with an Asterisk server via SIP. I use a small VM hosted on AWS for SIP termination for two reasons:</p>
<ul>
<li>To maintain a static IP using an Elastic IP, so that Twilio can reach us reliably. With a residential connection, your IP may be changed periodically by your ISP.</li>
<li>To ensure that the RTP packets containing audio aren't <a href="https://n2net.com/the-nat-and-sip-problem-heres-how-to-fix-it/">impacted by NAT</a>.</li>
</ul>
<p>More information is available in <a href="/resources/pdfs/twilio-asterisk-sip-trunking-configuration-guide.pdf">Asterisk with Twilio Elastic SIP Trunking Configuration Guide</a>.</p>
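<p>For flavor, the Twilio-facing side of the trunk in <code>pjsip.conf</code> looks roughly like the following sketch; the termination domain and signaling IP range are placeholders, so defer to the linked guide and Twilio's current documentation for the authoritative values:</p>
<pre><code>; /etc/asterisk/pjsip.conf -- sketch of a Twilio Elastic SIP trunk
[twilio]
type=endpoint
context=from-twilio
disallow=all
allow=ulaw
aors=twilio

[twilio]
type=aor
; "example" is a placeholder for your termination SIP domain
contact=sip:example.pstn.twilio.com

[twilio]
type=identify
endpoint=twilio
; Twilio's signaling IP ranges go here (see their docs)
match=54.172.60.0/30
</code></pre>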
<p>Once the call is received by Asterisk, it places an outgoing call to the Asterisk server on my home network via the Inter-Asterisk eXchange protocol (IAX2). The servers communicate over a WireGuard VPN, which lets the cloud Asterisk reach my home network (which may have a dynamic IP) at a static IP provided by the VPN. Because I use pfSense as my home router, traffic to and from the WireGuard network is routed transparently on my home network and only the cloud server needs a WireGuard peer configured; running the WireGuard client directly on both Linux servers is an equally simple alternative.</p>
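<p>The IAX2 leg between the two servers amounts to a named peer in <code>iax.conf</code> on each side. A minimal sketch, with hypothetical names, secrets, and WireGuard addresses:</p>
<pre><code>; /etc/asterisk/iax.conf on the cloud server (the home side mirrors this)
[general]
bindport=4569

[home]
type=friend
; static WireGuard address of the home server (placeholder)
host=10.0.0.2
secret=change-me
context=from-cloud
</code></pre>
<p>The cloud dialplan can then hand calls off with <code>Dial(IAX2/home/${EXTEN})</code>.</p>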
<figure>
<img src="/resources/images/2022-11/hp-dl380g4.jpg" alt="HP DL380 G4" />
<figcaption>HP DL380 G4</figcaption>
</figure>
<p>The Asterisk server on my home network is an HP DL380 G4, an old server I use because of its 3.3V PCI slots, which let me use the older and cheaper Digium TE420 PCI cards. Fun fact: the server comes with two gigabit Ethernet ports on the back, and with Ubuntu Server it's simple to bond these ports together into one 2 Gbps connection -- the <a href="https://help.ubuntu.com/community/UbuntuBonding">documentation</a> is detailed, but there's a simple wizard at installation time.</p>
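<p>With netplan, the bond amounts to only a few lines; this is a sketch assuming interface names <code>eno1</code>/<code>eno2</code> (yours will differ) and round-robin mode:</p>
<pre><code># /etc/netplan/01-bond.yaml -- hypothetical bonding sketch
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: balance-rr
      dhcp4: true
</code></pre>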
<figure>
<img src="/resources/images/2022-11/te420.jpg" alt="Digium TE420" />
<figcaption>Digium TE420</figcaption>
</figure>
<p>The TE420 cards, like other Digium cards, use the <a href="https://wiki.asterisk.org/wiki/display/DAHDI/DAHDI">DAHDI drivers</a>. The driver may need to be compiled manually so that the <a href="http://www.rowetel.com/wordpress/?page_id=454"><code>oslec</code></a> driver (the <a href="https://github.com/torvalds/linux/tree/master/drivers/misc/echo"><code>echo</code></a> module in the Linux kernel) can be used for echo cancellation; for more info, read this interview: <a href="/resources/pdfs/oslec-interview.pdf">Oslec, the Open Source Line Echo Canceller</a>. For older TDM410 cards, manual compilation is a necessity, since their PCI IDs have been disabled in the driver. For now, they can be re-enabled by adding this entry</p>
<pre><code class="language-c">{ 0xd161, 0x8005, PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &amp;wctdm410 },
</code></pre>
<p>to the <code>DEFINE_PCI_DEVICE_TABLE</code> array in <code>drivers/dahdi/wctdm24xxp/base.c</code>.</p>
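<p>The mechanism is straightforward: the PCI core walks the driver's device table looking for a vendor/device match, and the driver only binds to cards whose IDs appear there. A userspace sketch of that matching logic (this is an illustration, not the kernel code; 0xd161 is Digium's PCI vendor ID and 0x8005 the TDM410 device ID from the entry above):</p>
<pre><code class="language-c">#include &lt;stdio.h&gt;

#define PCI_ANY_ID (~0u)

/* Simplified stand-in for the kernel's struct pci_device_id. */
struct pci_device_id {
    unsigned int vendor, device, subvendor, subdevice;
};

/* Hypothetical fragment of the wctdm24xxp table with the
   re-added TDM410 entry. */
static const struct pci_device_id wctdm_ids[] = {
    { 0xd161, 0x8005, PCI_ANY_ID, PCI_ANY_ID },
    { 0, 0, 0, 0 } /* terminator */
};

/* The PCI core walks the table and binds the driver on a match. */
static int table_matches(unsigned int vendor, unsigned int device) {
    for (int i = 0; wctdm_ids[i].vendor != 0; i++)
        if (wctdm_ids[i].vendor == vendor)
            if (wctdm_ids[i].device == device)
                return 1;
    return 0;
}

int main(void) {
    printf("TDM410 matched: %d\n", table_matches(0xd161, 0x8005));
    return 0;
}
</code></pre>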
<p>Once the Asterisk server on my home network receives the call over IAX2, it can route to multiple systems via T1, through the TE420 card:</p>
<ul>
<li>To an Adit 600, which provides dozens of modem-quality subscriber lines.</li>
<li>To a Lucent PortMaster 3, a digital modem bank providing 56k connections.</li>
</ul>
<figure>
<img src="/resources/images/2022-11/adit600.jpg" alt="Adit 600" />
<figcaption>Adit 600</figcaption>
</figure>
<p>The <a href="https://www.netsolutionworks.com/force10/carrier-transport-access/force10-adit-600.asp">Adit 600</a> is a Converged Services Access Gateway which is capable of a whole slew of functions that aren't that useful anymore. But when configured with FXS cards, it can act as a channel bank -- converting from T1 to subscriber lines or plain old telephone service (POTS). This means we can use our TE420 card in conjunction with an Adit 600 and a 66 block to break out up to 48 individual telephone lines. On a good day, each is available on eBay for under $100, making this the cheapest way to get this volume of modem-quality POTS lines that I've found.</p>
<h2 id="what-does--modem-quality--mean">What does <em>modem quality</em> mean?</h2>
<p>A standard POTS line, called narrowband, is sampled at 8 kHz. It's optimized for human voice between 300 and 4,000 Hz. To reproduce a 4 kHz signal reliably, i.e. to have a <a href="https://en.wikipedia.org/wiki/Nyquist_frequency"><em>Nyquist frequency</em></a> of 4 kHz, a sampling rate of double that, or 8 kHz, is necessary. Twenty-four of these PCM streams are time-division multiplexed (TDM) onto a T1 line, so the line operates at 24 times the speed of a single line, and each end synchronizes to switch between the lines. This is why T1 is synchronous rather than asynchronous like Ethernet: each line is always &quot;up&quot;, and the bandwidth is always available for a call.</p>
<p>This 8 kHz sampling is hijacked by high-speed digital modems. With direct access to a T1 connection, these modems can ensure that each individual sample represents one of the modem's symbols, saturating the available bandwidth with encoded data. Since T1 uses bit-robbing to encode signaling information, the lowest bit of each eight-bit time slot is clobbered, so the maximum data throughput is 8 kHz times 7 bits, or 56 kbps. Importantly, each bit must make it through the system unharmed. Modern lossy audio compression, and <a href="https://asterisk-users.digium.narkive.com/rCbOWH0E/configure-dahdi-with-tdm410-for-analog-modem-calls#post2">Digium's TDM cards</a>, can't provide that fidelity, so modem communication through them is unreliable at best.</p>
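<p>The arithmetic above can be checked directly, and the same accounting also recovers T1's familiar 1.544 Mbps line rate (24 channels of 8 bits at 8,000 frames per second, plus one framing bit per 193-bit frame):</p>
<pre><code class="language-python">samples_per_second = 8_000      # narrowband sampling rate
bits_per_sample = 8             # one PCM byte per sample
channels = 24                   # DS0s multiplexed onto a T1

# Bit-robbing claims the low bit, leaving 7 usable bits per sample
modem_bps = samples_per_second * (bits_per_sample - 1)
print(modem_bps)                # 56000, i.e. 56 kbps

# T1 line rate: 24 channels plus 1 framing bit per frame
t1_bps = samples_per_second * (channels * bits_per_sample + 1)
print(t1_bps)                   # 1544000, i.e. 1.544 Mbps
</code></pre>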
<p>For more information on T1, I recommend <a href="https://www.oreilly.com/library/view/t1-a-survival/0596001274"><em>T1: A Survival Guide</em></a> by Matthew S. Gast. It's one of the few resources on this topic I could find.</p>
<h2 id="configuring-asterisk">Configuring Asterisk</h2>
<p>The below is taken from my notes on configuring an already functioning Asterisk installation for a TE420 card once the DAHDI drivers have been installed:</p>
<ul>
<li>
<p>Add <code>wct4xxp</code> to <code>/etc/dahdi/modules</code>.</p>
</li>
<li>
<p>Edit <code>/etc/dahdi/genconf_parameters</code> and set the echo canceller line:</p>
<pre><code>echo_can		oslec
</code></pre>
<p>Also uncomment the following line in <code>/etc/dahdi/init.conf</code>:</p>
<pre><code>DAHDI_UNLOAD_MODULES=&quot;dahdi echo&quot;
</code></pre>
<p>I've not had any issues with <a href="http://www.rowetel.com/wordpress/?page_id=454"><code>oslec</code></a>, and it works well with cards that lack hardware echo cancellation (a feature that is often expensive).</p>
</li>
<li>
<p>After running <code>dahdi_genconf</code>, edit <code>/etc/dahdi/system.conf</code> to reflect the channels you actually need. Here I use two <em>channelized T1</em> (cT1) spans to connect to an Adit 600 channel bank, and two ISDN PRI spans to connect to a Lucent PortMaster 3. Also edit <code>/etc/asterisk/dahdi-channels.conf</code> to reflect the same changes, assign the correct Asterisk originating context, and so on. I've added a base configuration to <code>/etc/asterisk/chan_dahdi.conf</code> and edited the generated spans to extend it; then I can define specific phones which inherit from their spans and add caller ID.</p>
</li>
</ul>
<p>The <code>/etc/dahdi/system.conf</code> file:</p>
<pre><code># Autogenerated by /sbin/dahdi_genconf on Sat Dec  4 00:28:52 2021
# If you edit this file and execute /sbin/dahdi_genconf again,
# your manual changes will be LOST.
# Dahdi Configuration File
#
# This file is parsed by the Dahdi Configurator, dahdi_cfg
#
# Span 1: TE4/0/1 &quot;T4XXP (PCI) Card 0 Span 1&quot; ESF/B8ZS RED
span=1,1,0,esf,b8zs
# termtype: te
fxols=1-24
echocanceller=oslec,1-24

# Span 2: TE4/0/2 &quot;T4XXP (PCI) Card 0 Span 2&quot; ESF/B8ZS RED
span=2,2,0,esf,b8zs
# termtype: te
fxols=25-48
echocanceller=oslec,25-48

# Span 3: TE4/0/3 &quot;T4XXP (PCI) Card 0 Span 3&quot; ESF/B8ZS RED
span=3,3,0,esf,b8zs
# termtype: te
bchan=49-71
dchan=72
echocanceller=oslec,49-71

# Span 4: TE4/0/4 &quot;T4XXP (PCI) Card 0 Span 4&quot; (MASTER) ESF/B8ZS RED
span=4,4,0,esf,b8zs
# termtype: te
bchan=73-95
dchan=96
echocanceller=oslec,73-95

# Global data

loadzone	= us
defaultzone	= us
</code></pre>
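<p>For reference, the generated <code>span</code> lines above follow a fixed field order; this is my reading of the <code>system.conf</code> syntax, so check the DAHDI documentation for authoritative details:</p>
<pre><code># span=&lt;span number&gt;,&lt;timing priority&gt;,&lt;line build-out&gt;,&lt;framing&gt;,&lt;coding&gt;
# timing: 0 means never use this span as a sync source; 1 or more gives
# its priority as a sync source, where lower numbers are preferred.
span=1,1,0,esf,b8zs
</code></pre>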
<p>The <code>/etc/asterisk/dahdi-channels.conf</code> file:</p>
<pre><code>; Autogenerated by /sbin/dahdi_genconf on Sat Dec  4 00:28:52 2021
; If you edit this file and execute /sbin/dahdi_genconf again,
; your manual changes will be LOST.
; Dahdi Channels Configurations (chan_dahdi.conf)
;
; This is not intended to be a complete chan_dahdi.conf. Rather, it is intended
; to be #include-d by /etc/chan_dahdi.conf that will include the global settings
;

; Span 1: TE4/0/1 &quot;T4XXP (PCI) Card 0 Span 1&quot; ESF/B8ZS RED
[span-1](phones)
group=0,11
context=from-internal
signalling=fxo_ls
dahdichan=1-24

; Span 2: TE4/0/2 &quot;T4XXP (PCI) Card 0 Span 2&quot; ESF/B8ZS RED
[span-2](phones)
group=0,12
context=from-internal
signalling=fxo_ls
dahdichan=25-48

; Span 3: TE4/0/3 &quot;T4XXP (PCI) Card 0 Span 3&quot; ESF/B8ZS RED
[span-3](phones)
group=0,13
context=from-internal
switchtype=national
signalling=pri_net
dahdichan=49-71

; Span 4: TE4/0/4 &quot;T4XXP (PCI) Card 0 Span 4&quot; (MASTER) ESF/B8ZS RED
[span-4](phones)
group=0,14
context=from-internal
switchtype=national
signalling=pri_net
dahdichan=73-95

; Overrides to add caller id or mailboxes
[phone-1](span-1)
callerid=Closet &lt;1001&gt;
</code></pre>
<p>The <code>/etc/asterisk/chan_dahdi.conf</code> file:</p>
<pre><code>[phones]
echocancel=yes
threewaycalling=yes
transfer=yes
usecallerid=yes
callerid=asreceived

#include /etc/asterisk/dahdi-channels.conf
</code></pre>
<p>In a working system, <code>systemctl status dahdi.service</code> should report that it is using the <code>wct4xxp</code> driver. The DAHDI drivers will need to be recompiled whenever the kernel is updated; you may notice after a reboot that the DAHDI service reports no drivers for this reason.</p>
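<p>To tie the spans together, the <code>from-internal</code> context referenced above lives in <code>extensions.conf</code>. A minimal sketch (the extension number and IAX2 peer name are hypothetical):</p>
<pre><code>; /etc/asterisk/extensions.conf
[from-internal]
; Ring the closet phone (DAHDI channel 1) for 20 seconds
exten =&gt; 1001,1,Dial(DAHDI/1,20)
 same =&gt; n,Hangup()

; Anything else goes back out to the cloud server over IAX2
exten =&gt; _1NXXNXXXXXX,1,Dial(IAX2/cloud/${EXTEN})
</code></pre>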
<h1 id="adendum">Addendum</h1>
<p>An interesting anecdote about the name T1 taken from <a href="https://www.dcbnet.com/notes/0103t.html">Data Comm</a> is reproduced below:</p>
<blockquote>
<p>After reading our white paper, a number of customers asked us &quot;where'd the &quot;T&quot; come from in T-1, T-3, etc.? There are many stories floating around (such as T=Time, T= Transmission, T=Terrestrial, Just the next letter in sequence, etc.). We thought the best place to get the true story would be from someone who helped develop this technology. So, we asked our friend Dr John Pan, who worked for Bell Labs during the time T carrier was being developed. Dr Pan is now Vice President of Loop Telecommunications International, our strategic partner in the T-1 DACS, and DSU/access device field since 1997.</p>
<p>Here's what he had to say...</p>
<p>The &quot;T&quot; in T1</p>
<p>The story of the &quot;T&quot; in T1 has its roots way back in 1917, when AT&amp;T deployed the first carrier system, called the &quot;A&quot; system. A total of 7 A-systems, providing four voice channels over an open wire pair, were ever deployed. Then came successive analog frequency division multiplex systems named B, C, D, and so forth. Few of these carrier systems ever saw commercial service. AT&amp;T, being a monopoly, could well afford many dogs. A notable success is the &quot;L&quot; system, providing 600 (L1) and later 1800 (L3) voice channels over a pair of coaxial cables, in long haul service from 1944 to 1984, until breakup of the Bell System forced AT&amp;T to migrate to optical fiber. The last of the analog carrier system is the &quot;N&quot; system and its variants, providing 12 voice channels for intracity short haul. Along with the even more forgettable &quot;O&quot;, &quot;P&quot;, and &quot;U&quot; systems, the emergence of &quot;T&quot; killed them all.</p>
<p>In 1957, when digital systems were first proposed and developed, the boss decided to skip Q, R, S, and to use T, for Time Division. The idea was this will be the world's first time division system. Interestingly, except for &quot;U&quot;, another system that never made it, this naming system ended.</p>
<p>Variants of T1, called T1C, T2, T3, and T4, all died. They are survived by signals that would have been carried on all these systems, called DS1, DS2, DS3, and DS4.</p>
<p>Among the successors to T1 vying for success at Bell Labs, digital coaxial cables, digital microwave, satellite, circular waveguide, optical mirror, and optical fiber, none achieved commercial success save fiber.</p>
</blockquote>
<p>More links:</p>
<ul>
<li><a href="/resources/pdfs/portmaster3-hardware-installation-guide.pdf">Lucent PortMaster 3 Hardware Installation Guide</a></li>
<li><a href="/resources/pdfs/portmaster3-configuration-guide.pdf">Lucent PortMaster 3 Configuration Guide</a></li>
</ul>

      </div>
    </content>
  </entry>
  
</feed>
