<p><em>Thick and thin</em>: Mostly math with Robert. By Robert C. Haraway, III. https://bobbycyiii.github.io/feed.xml (feed generated 2021-11-19).</p>

<h1 id="representing-permutations-relatively-compactly">Representing permutations relatively compactly (2021-03-10)</h1>
<p>Here is a more relative way to represent a permutation with the same economical number of bits.</p>
<h2 id="index-vs-offset">Index <em>vs.</em> offset</h2>
<p>Long have programming languages differed on whether indexing should start at 1 or at 0.
Dijkstra settled the matter (for himself at least) in his manuscript EWD831, <a href="https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html">Why numbering should start at zero</a>.
Still, languages in the Fortran tradition (Fortran itself, R, Matlab, Julia) use 1-based indexing.
My opinion on this, after much reflection, accords with a <a href="https://discourse.julialang.org/t/whats-the-big-deal-0-vs-1-based-indexing/1102/5?u=bobbycyiii">comment</a> by user <code class="language-plaintext highlighter-rouge">ihnorton</code> on Julia’s Discourse:</p>
<blockquote>
<p>… [T]he choice comes down to a preference for counting (1-based) versus offsets (0-based).</p>
</blockquote>
<p>I would prefer to have this read <em>indexing</em> versus <em>offsets</em>.
Unfortunately then my stance will likely please no one.
Contrary to Dijkstra I think indexing should start at 1.
Contrary to Julia’s conventions, I prefer to work with offsets (for the reasons set out in EWD831) and eschew references to indices.
But keeping the distinction in mind has finally made me think it possible for me to work productively in Julia.</p>
<h2 id="permutations-by-offsets">Permutations by offsets</h2>
<p>For this reason, the <a href="/2021/03/06/representing-permutations.html">previous post</a>’s code is not quite good enough.
If possible it should be index-base agnostic.
This is quite possible.
Instead of recording the image under a transposition, one records the offset by a transposition.
This is quite easily done; what is more, it has the great virtue of representing the identity map by all zeroes.
Since Julia inspired this, I’ll write it in Julia with the necessary small modifications.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>module TinyPermutations
function act(N, start, sigma, x)
    # to offset
    x -= start
    f, nbits, mask = 0, 0, 0
    while f < N
        y = f - (mask&sigma)
        x = x + (x==f)*(y-x) + (x==y)*(f-x)
        sigma >>= nbits
        f += 1
        b = f > mask
        mask += b*(mask+1)
        nbits += b
    end
    # back to index
    return x+start
end
export act
end
</code></pre></div></div>

<h1 id="representing-permutations-compactly">Representing permutations compactly (2021-03-06)</h1>
<p>Here is a way to represent a permutation with a fairly economical number of bits.</p>
<p>The number of permutations on \(N\) objects is \(N!,\) which is \(\prod_{1 \leq i \leq N}i.\)
Therefore, to specify one of these permutations requires at least \(\left\lceil \sum_{1 \leq i \leq N} \log_2 i \right\rceil\) bits.
I found a way to represent such a permutation with \(\sum_{1 \leq i \leq N} \left\lceil \log_2 i \right\rceil\) bits.
This has probably been discovered before (even several times).
But I still enjoyed coming up with it.</p>
<h1 id="sufficiency">Sufficiency</h1>
<p>Let \(B(m) = \sum_{1 \leq i \leq m} \left\lceil \log_2 i \right\rceil.\)
It is easiest to represent a permutation on no elements, of course.
This requires no bits of data—there are no elements to send anywhere.
All the information is contained, so to speak, in the type of the permutation,
and there is only one such permutation.
And in fact, \(B(0) = 0.\)</p>
<p>So suppose instead that \(\sigma\) is a permutation on \(N+1\) elements.
To fix ideas, let us assume the elements are the numbers \(0, \cdots, N.\)
Then the permutation \((N\ \sigma(N)) \circ \sigma\) fixes \(N.\)
We may therefore regard \(next(\sigma) = (N\ \sigma(N)) \circ \sigma\) as a permutation on \(N\) elements.
By induction, we may represent \(next(\sigma)\) with \(B(N)\) bits.
Since there are \(N+1\) possibilities for \(\sigma(N)\) (for \(N\) itself is a possibility),
we may represent \(\sigma(N)\) with \(\left\lceil \log_2 (N+1) \right\rceil\) bits.
Now, \((N\ \sigma(N)) \circ next(\sigma) = \sigma,\) so \(\sigma(N)\) and \(next(\sigma)\) together determine \(\sigma.\)
Therefore we may represent \(\sigma\) with</p>
\[\left\lceil \log_2 (N+1) \right\rceil + B(N) = B(N+1)\]
<p>bits, completing the induction.</p>
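<p>To make the bound concrete, here is a small Python sketch (Python being the language of the code later in this post) comparing \(B(N)\) with the information-theoretic minimum \(\left\lceil \log_2 N! \right\rceil;\) the function names are mine, not from the post:</p>

```python
from math import ceil, log2, factorial

def B(m):
    # B(m) = sum over 1 <= i <= m of ceil(log2 i)
    return sum(ceil(log2(i)) for i in range(1, m + 1))

def lower_bound(m):
    # ceil(log2 m!) bits are needed to distinguish all m! permutations
    return ceil(log2(factorial(m)))

# B(N) never beats the information-theoretic bound, but stays close to it
for N in range(1, 33):
    assert lower_bound(N) <= B(N)
```

<p>For \(N = 4\) the two quantities coincide at 5 bits, which is why the representation is optimal there.</p>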
<h1 id="how-to-act">How to act</h1>
<p>By the above argument, we may write</p>
\[\begin{align*}\sigma &= (N\quad \sigma(N)) \circ (N-1\quad next(\sigma)(N-1)) \circ \cdots \\ &= \cdots (N-1\quad next(\sigma)(N-1)) \circ (N\quad \sigma(N)).\end{align*}\]
<p>Hence the initial cycle is the one on the “inside.”
It is easier to shift bits right than to shift them left.
So we will put the initial cycle’s bits in the least significant position possible.
Thus in general, cycles’ bits will be recorded in the least significant position possible, after prior cycles’ bits have been so recorded.</p>
<p>To give an example before formalizing this in code, consider the number <code class="language-plaintext highlighter-rouge">100</code> given in binary.
Now, we want this to represent a permutation \(\sigma\) on four elements.
So we’ll write it as <code class="language-plaintext highlighter-rouge">00100</code> instead.
The least significant bit is <code class="language-plaintext highlighter-rouge">0</code>, indicating the cycle \((1\ 0),\) instead of the identity cycle \((1\ 1).\)
The next bits need to represent one of the three possible elements \(a \in \{0,1,2\}\) in the next cycle \((2\ a).\)
This requires two bits, which here are <code class="language-plaintext highlighter-rouge">10</code>, which is 2 itself.
Thus this represents the cycle \((2\ 2).\)
Finally we require two bits still to represent the four possible elements \(a \in \{0,1,2,3\}\) in the final cycle \((3\ a).\)
These bits are <code class="language-plaintext highlighter-rouge">00</code>, so this cycle is \((3\ 0).\)
All told then we have \(\sigma = (1\ 0) (3\ 0) = (0\ 1\ 3).\)</p>
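<p>That worked example can be checked mechanically by applying the three transpositions in order, innermost first. The helper name below is hypothetical, introduced just for this check:</p>

```python
def apply_transpositions(cycles, x):
    # apply each transposition (f, y) to x, innermost (least significant) first
    for (f, y) in cycles:
        if x == f:
            x = y
        elif x == y:
            x = f
    return x

# 00100 encodes (1 0), then (2 2), then (3 0)
images = [apply_transpositions([(1, 0), (2, 2), (3, 0)], x) for x in range(4)]
assert images == [1, 3, 2, 0]  # the cycle (0 1 3)
```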
<p>With that example given, let’s now derive more formally how to act on an element \(x\) via a permutation \(\sigma\) given via this representation.
The overall structure should be something like</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>initialize variables
while (there is another transposition)
apply it to x
update variables
return x
</code></pre></div></div>
<p>How do we tell if there is another transposition, and if so, what it is?
Considering the previous example, we see that it might be useful to have a bitmask variable to grab a collection of bits; and a variable indicating what element the current transposition considers its initial element in the cycle.
Now, the question is whether all that is enough information to tell when to exit the loop.
If we allow ourselves to refer to the number \(N,\) say as a constant, then yes we can.
If \(f \geq N,\) we can stop, since \(\sigma\) only acts on \(\{0, \cdots, N-1\}.\)
So now we know the algorithm looks like</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>initialize variables
while (f < N)
get next transposition
apply it to x
update variables
return x
</code></pre></div></div>
<p>Clearly the next transposition is \((f\ y)\) where \(y\) is the number indicated by the next collection of bits.
One convenient way to write this would be <code class="language-plaintext highlighter-rouge">y ← mask & bits</code>, where <code class="language-plaintext highlighter-rouge">mask</code> is a mask with the appropriate number of bits; <code class="language-plaintext highlighter-rouge">bits</code> are the remaining bits that we haven’t dealt with from the representation of \(\sigma;\) and <code class="language-plaintext highlighter-rouge">&</code> is bitwise-<em>and</em>, not boolean-<em>and</em>.
So we can rewrite the loop body as follows:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>initialize variables
while (f < N)
y ← mask & bits
if x = f
x ← y
else if x = y
x ← f
update variables
return x
</code></pre></div></div>
<p>This has a branch inside a loop, which is not great.
Identifying booleans with natural numbers in the usual way, viz. <code class="language-plaintext highlighter-rouge">false = 0</code> and <code class="language-plaintext highlighter-rouge">true = 1</code>, we can rewrite the branch as follows.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>initialize variables
while (f < N)
y ← mask & bits
x ← x + (x = f)⋅(y-x) + (x = y)⋅(f-x)
update variables
return x
</code></pre></div></div>
<p>It remains to determine how to update the variables <code class="language-plaintext highlighter-rouge">f</code> and <code class="language-plaintext highlighter-rouge">mask</code>.
Of course <code class="language-plaintext highlighter-rouge">f</code> is easy; just increment it.
Now, <code class="language-plaintext highlighter-rouge">mask</code> is supposed to be \(\left\lceil\log_2 (f+1)\right\rceil\) least-significant 1 bits, and 0 bits everywhere else.
That is, it is supposed to be the number \(2^{\left\lceil\log_2 (f+1)\right\rceil} - 1.\)
But putting that in the update would evaluate a discrete logarithm.
We can do better than that.
First, we know <code class="language-plaintext highlighter-rouge">f</code> is representable in \(\left\lceil\log_2 (f+1)\right\rceil\) bits.
The maximum number so representable happens to be <code class="language-plaintext highlighter-rouge">mask</code> itself.
If <code class="language-plaintext highlighter-rouge">f+1 > mask</code> then the next <code class="language-plaintext highlighter-rouge">f</code> will require an additional bit.
So instead of evaluating a discrete logarithm, to update the variables we can just write</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>f ← f+1
if f > mask
mask ← 2⋅mask + 1
</code></pre></div></div>
<p>Again, this conditional assignment occurs in a loop.
We may rewrite it as follows.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>b ← f > mask
mask ← mask + b⋅(mask + 1)
</code></pre></div></div>
<p>This assumes that the next interesting bits are already least significant.
So we must also update the <code class="language-plaintext highlighter-rouge">bits</code> variable by shifting it right the appropriate number of bits.
We could divide <code class="language-plaintext highlighter-rouge">bits</code> by <code class="language-plaintext highlighter-rouge">mask+1</code>, but division is best avoided.
Instead we keep track of the number of bits in the mask, <code class="language-plaintext highlighter-rouge">nbits</code>.
We increment it along with <code class="language-plaintext highlighter-rouge">mask</code> when needed.
Thus updating the variables looks like this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bits ← bits >> nbits
f ← f+1
b ← f > mask
mask ← mask + b⋅(mask + 1)
nbits ← nbits + b
</code></pre></div></div>
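<p>As a quick sanity check on the branch elimination (a sketch, with function names of my own choosing), the branchless update agrees with the branching version step for step:</p>

```python
def update_branching(f, mask):
    f = f + 1
    if f > mask:
        mask = 2*mask + 1
    return f, mask

def update_branchless(f, mask):
    f = f + 1
    b = f > mask  # booleans act as 0 or 1
    mask = mask + b*(mask + 1)
    return f, mask

# the two updates track each other exactly
state1 = state2 = (0, 0)
for _ in range(1000):
    state1 = update_branching(*state1)
    state2 = update_branchless(*state2)
    assert state1 == state2
```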
<p>It remains to initialize the variables.
Clearly we begin with <code class="language-plaintext highlighter-rouge">f = 0</code>.<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>
There are no other elements yet that we need to distinguish from <code class="language-plaintext highlighter-rouge">f</code>, so <code class="language-plaintext highlighter-rouge">nbits = 0</code> and <code class="language-plaintext highlighter-rouge">mask = 0</code> likewise.</p>
<p>Therefore, in conclusion, the following Hoare triple should hold.
(Here we use Dijkstra’s guarded command language.)</p>
\[\begin{align*}
\{&N, X: \mathbb{N},\quad \sigma: S_N &\mathbf{constants} \\
&f, nbits, mask, bits, x, y : \mathbb{N} &\mathbf{variables} \\
&bits\mbox{ represents }\sigma &\}
\end{align*}\]
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>f,nbits,mask,x ← 0,0,0,X;
do f < N →
y ← mask & bits
; x ← x + (x=f)⋅(y-x) + (x=y)⋅(f-x)
; bits ← bits >> nbits
; f ← f+1
; b ← f > mask
; mask, nbits ← mask + b⋅(mask+1), nbits + b
od;
</code></pre></div></div>
\[\{ x = \sigma(X)\}\]
<p>In Python we could write this as follows.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def act(N, sigma, x):
    f, nbits, mask = 0, 0, 0
    while f < N:
        y = mask & sigma
        x += (x==f)*(y-x) + (x==y)*(f-x)
        sigma >>= nbits
        f += 1
        b = f > mask
        mask += b*(mask+1)
        nbits += b
    return x
</code></pre></div></div>
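<p>As a sanity check, the worked example from earlier (the permutation encoded as <code class="language-plaintext highlighter-rouge">00100</code> acting on four elements) can be run through the function, restated verbatim here so the snippet stands alone:</p>

```python
def act(N, sigma, x):
    # act as defined above
    f, nbits, mask = 0, 0, 0
    while f < N:
        y = mask & sigma
        x += (x == f)*(y - x) + (x == y)*(f - x)
        sigma >>= nbits
        f += 1
        b = f > mask
        mask += b*(mask + 1)
        nbits += b
    return x

# 0b00100 encodes (1 0), (2 2), (3 0), i.e. the cycle (0 1 3)
assert [act(4, 0b00100, x) for x in range(4)] == [1, 3, 2, 0]
```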
<h1 id="some-comments">Some comments</h1>
<p>The above method is redundant, and doesn’t check that all the transpositions have the form \((n\ m)\) with \(m \leq n.\)
One could write a method to verify the form of each transposition before using it.
The above code is faster because it skips this verification.</p>
<p>Also, this method is as data-space-efficient as possible for \(N \leq 4.\)
Since I like three-dimensional computational topology, this is all I need.
However, for \(N\leq 4,\) the number of permutations is at most 24.
So a table lookup in this case might be more appropriate than the above function!</p>
<p>The biggest permutations you can fit in one 64-bit word are those on nineteen elements.
Remarkably, these fit exactly in 64 bits with the above representation.
Likewise, one can fit permutations on five elements in a single byte.</p>
<p>Permutations on 31 elements need 124 bits with this representation, and permutations on 32 elements need 129 bits, so 31 elements is the most one can handle within 128 bits.
Even at 129 bits, this is a (space) improvement over the “sheep-and-goats” representation in Warren’s excellent <em>Hacker’s Delight</em>, which uses 160 bits to represent permutations on 32 elements.</p>
<h1 id="footnote">Footnote</h1>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>The reader may be concerned about the base case \(\sigma: \emptyset \to \emptyset.\) It is true that the code above will necessarily fail in this case. But it also succeeds in this case, since to run the code, you first need an element to run it on. Assuming we have an element from the empty set lets us conclude the code terminates correctly, even though it also fails. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>

<h1 id="jordan-curve-illustrations-wiggling-a-course">Jordan curve illustrations: wiggling a course (2021-03-05)</h1>
<p>There is a Jordan curve such that every piecewise-linear path from the inside to the outside intersects the curve infinitely many times.
I define the wiggling precisely here in terms of courses (turn-type sequences).</p>
<h1 id="local-wiggling-to-triples">Local wiggling to triples</h1>
<p>Last time I described the local picture of a wiggled hex curve.
That is, given a hex curve \(\gamma\) and a triangle \(T\) with a turn of \(\gamma,\)
I described what \(f(\gamma) \cap T\) looks like, and gave a picture of this.
Our task now is to draw a hex curve wiggled several times.</p>
<p>Our preferred representation of a hex curve is its course, <em>i.e.</em> its sequence of turn-types.
The local picture in a triangle always wiggles to three oriented components, yielding three courses.
We want to specify triples of courses of the sort one gets by taking the courses of such unions of oriented components in a sequence of consecutively adjacent triangles.</p>
<p>More specifically, let us say an (intermediate) <em>triple</em> is a triple of courses \(trp = (\mu, \upsilon, \kappa)\)
such that for some triangles \(t_0\) and \(t_\nu\) (possibly the same),</p>
<ul>
<li>\(\mu\) beginning at \(t_{0,3}^A\) ends at \(t_{0,4}^A;\)</li>
<li>\(\upsilon\) beginning at \(t_{0,5}^A\) ends at \(t_{\nu,3}^F;\) and</li>
  <li>\(\kappa\) beginning at \(t_{\nu,4}^F\) ends at \(t_{\nu,5}^F.\)</li>
</ul>
<p>Fixing \(t_0\) and a bend in \(t_0\) determines \(t_\nu.\)</p>
<p>In the previous post we worked out what these triples were for \(f(\gamma) \cap t\) when \(t\) contained a port turn.
As in that post, we label the components of \(f(\gamma) \cap t\) variously \(M_t,\) \(Y_t,\) and \(C_t.\)
Recall that</p>
<ul>
<li>\(M\) starts from \(t_3^A\) with course \(PSPSSSSPSPS,\) and thus ends at \(t_4^A;\)</li>
<li>\(Y\) starts from \(t_5^A\) with course \(PSPSPSPPSPS,\) and thus ends at \(t_3^F;\) and finally,</li>
<li>\(C\) starts from \(t_4^F\) with course \(PSPSSPSPSPSPPPPSPSPSPSPPSPS,\) and thus ends at \(t_5^F.\)</li>
</ul>
<p>We express this in Python as follows:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>P_becomes = ("PSPSSSSPSPS","PSPSPSPPSPS","PSPSSPSPSPSPPPPSPSPSPSPPSPS")
</code></pre></div></div>
<p>Suppose we follow that port turn in a triangle \(t\) with a starboard turn in another triangle \(t'.\)
For the starboard turn in \(t',\) we get the following, <em>mutatis mutandis</em>:</p>
<ul>
<li>\(M\) starts from \({t'}_3^A\) with course \(PSPSSPSPSPSPSSSSPSPSPSPPSPS,\) and thus ends at \({t'}_4^A;\)</li>
<li>\(Y\) starts from \({t'}_5^A\) with course \(PSPSSPSPSPS,\) and thus ends at \({t'}_3^F;\) and finally,</li>
<li>\(C\) starts from \({t'}_4^F\) with course \(PSPSPPPPSPS,\) and thus ends at \({t'}_5^F.\)</li>
</ul>
<p>(The ordering of triangles along the given sides of \(t'\) is also from port to starboard.)
We can express this likewise in Python as follows:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>S_becomes = ("PSPSSPSPSPSPSSSSPSPSPSPPSPS","PSPSSPSPSPS","PSPSPPPPSPS")
</code></pre></div></div>
<p>(The reader may check that if \(P_b\) is the former triple shown and \(S_b\) the latter,
then these triples are related by \(flip(P_b) = S_b,\) where the function \(flip\)
reverses each string in the triple, swaps \(P\) with \(S\) and <em>vice versa,</em> and reverses the triple,
as in the following code.)</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def flip(trp):
    # reverse each course, swap P with S, and reverse the triple
    lst = [list(reversed(path)) for path in trp]
    swap = lambda tok: "P" if tok == "S" else "S"
    lst = ["".join(swap(tok) for tok in path) for path in lst]
    return tuple(reversed(lst))
</code></pre></div></div>
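<p>The reader’s check can also be mechanized. The following self-contained snippet (with the hypothetical name <code class="language-plaintext highlighter-rouge">flip_courses</code>, to avoid colliding with the code above) confirms the claimed relation:</p>

```python
P_becomes = ("PSPSSSSPSPS", "PSPSPSPPSPS", "PSPSSPSPSPSPPPPSPSPSPSPPSPS")
S_becomes = ("PSPSSPSPSPSPSSSSPSPSPSPPSPS", "PSPSSPSPSPS", "PSPSPPPPSPS")

def flip_courses(trp):
    # reverse each course, swap P with S, then reverse the triple
    swap = {"P": "S", "S": "P"}
    rev = ["".join(swap[tok] for tok in reversed(course)) for course in trp]
    return tuple(reversed(rev))

assert flip_courses(P_becomes) == S_becomes
assert flip_courses(S_becomes) == P_becomes  # flipping is an involution
```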
<p>Thus the following statements hold:</p>
<ul>
<li>\(M_t\) intersects no components of \(f(\gamma) \cap t';\)</li>
<li>\(C_{t'}\) intersects no components of \(f(\gamma) \cap t;\)</li>
<li>\(Y_t\) ends at \(t_3^F;\)</li>
<li>\(M_{t'}\) begins at \({t'}_3^A\) and ends at \({t'}_4^A;\)</li>
<li>\(C_t\) begins at \(t_4^F\) and ends at \(t_5^F;\) and finally,</li>
<li>\(Y_{t'}\) begins at \({t'}_5^A.\)</li>
</ul>
<h1 id="joining-triples">Joining triples</h1>
<p>The union of all these components in the union \(t \cup t'\) therefore again has three oriented components.
Two components have not been fitted together with any others.
The first is the \(M\) component of \(f(\gamma) \cap t;\)
the second is the \(C\) component of \(f(\gamma) \cap t'.\)
Finally, the middle component is composed of the four other components.
Considering the components as oriented arcs,
and labelling them \(M,Y,C\) in \(t\) and \(M',Y',C'\) in \(t'\) in that order,
we have that \(f(\gamma) \cap (t \cup t')\) is also the union of three oriented components,
to wit \(M, Y\ast M' \ast C \ast Y', C'\) in that order.</p>
<p>The same incidence relations hold, with different courses, when following port with port,
starboard with port, or starboard with starboard.
Port to starboard and starboard to starboard are depicted in the following figure.</p>
<p><img src="/assets/images/jordan/joining_components.png" alt="Joining the components together." /></p>
<p>Thus, if \(trp = (M, Y, C)\) and \(trp' = (M', Y', C')\) are two intermediate triples,
we define their <em>join</em> \(trp \ast trp'\) to be \((M, Y\ast M' \ast C \ast Y', C').\)</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def join(trp0, trp1):
    (m, y, c) = trp0
    (mp, yp, cp) = trp1
    return (m, y + mp + c + yp, cp)
</code></pre></div></div>
<h1 id="wiggling-a-course">Wiggling a course</h1>
<p>Let \(Tok = \{P,S\}.\)
Let \(Tok^\ast\) be the set of courses.
Following the above, we define the function \(wgl: Tok \to Tok^\ast \times Tok^\ast \times Tok^\ast\) as follows:</p>
\[\begin{multline} wgl(P) = (PSPSSSSPSPS, PSPSPSPPSPS, \\ PSPSSPSPSPSPPPPSPSPSPSPPSPS), \end{multline}\]
\[\begin{multline} wgl(S) = (PSPSSPSPSPSPSSSSPSPSPSPPSPS, \\ PSPSSPSPSPS, PSPSPPPPSPS), \end{multline}\]
<p>so that \(wgl(S) = flip(wgl(P)).\)
Then, given a course \(\Gamma = k_0 k_1 \cdots k_{n-1}\) (with \(k_i \in Tok\)),
we define \(wiggle(\Gamma) = wgl(k_0) \ast wgl(k_1) \ast \cdots \ast wgl(k_{n-1}).\)
(This is well-defined since the join \(\ast\) is associative.)</p>
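<p>The associativity is easy to verify on symbolic courses; a minimal check using the join from above, restated so the snippet stands alone:</p>

```python
def join(trp0, trp1):
    # (M, Y, C) * (M', Y', C') = (M, Y + M' + C + Y', C')
    (m, y, c) = trp0
    (mp, yp, cp) = trp1
    return (m, y + mp + c + yp, cp)

a = ("m1", "y1", "c1")
b = ("m2", "y2", "c2")
c = ("m3", "y3", "c3")
assert join(join(a, b), c) == join(a, join(b, c))
```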
<p>With this definition we have almost determined the course of \(f(\gamma)\) given the course \(\Gamma\) of \(\gamma.\)
The above gives us that \(f(\gamma)\) is the union of three hex curves whose three courses are those in the triple \(wiggle(\Gamma).\)
It remains to determine how to join these courses.
Now, \(wiggle(\Gamma)\) is a triple \((MM, YY, CC)\) beginning in some triangle \(t\) and ending in some triangle \(T.\)
Assuming \(\Gamma\) is the course of a closed hex curve, likewise \(wiggle(\Gamma)\) is too.
Thus the aft side of \(t\) and the fore side of \(T\) coincide.
So \(t_i^A\) and \(T_i^F\) coincide along these sides for \(i \in \{3,4,5\}.\)
Now, \(YY\) ends at \(T_3^F\) and \(MM\) begins at \(t_3^A;\)
\(MM\) ends at \(t_4^A\) and \(CC\) begins at \(T_4^F;\) and finally
\(CC\) ends at \(T_5^F\) and \(YY\) begins at \(t_5^A.\)
Thus the complete course of \(f(\gamma)\) is \(YY\ast MM\ast CC.\)</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def wiggle(tokens):
    trps = map(lambda tok: P_becomes if tok == 'P' else S_becomes, tokens)
    tot = next(trps)
    for trp in trps:
        tot = join(tot, trp)
    (MM, YY, CC) = tot
    return YY + MM + CC
</code></pre></div></div>

<h1 id="jordan-curve-illustrations-drawing-the-curve">Jordan curve illustrations: drawing the curve (2021-03-05)</h1>
<p>There is a Jordan curve such that every piecewise-linear path from the inside to the outside intersects the curve infinitely many times.
I describe how to draw such a curve here, and do so.</p>
<h1 id="generating-a-point-sequence">Generating a point sequence</h1>
<p>It is easiest to describe wiggling as in the previous post.
One could encode courses directly using turtle graphics.
However, the number of segments to draw in the final picture is quite large.
Suppose we begin with a simplest hex loop, with course \(PPPPPP.\)
Every wiggling multiplies the number of tokens by \(49.\)
After just three wigglings the number of segments is \(705894.\)
We intend to draw the resulting hex curve.
To use turtle graphics would mean to use floating point arithmetic to accomplish rotations by 120 degrees port or starboard.
After so many rotations and additions, it is reasonable to suspect the resulting “curve” wouldn’t close up.
On the other hand, no triple of points in the grid of integers forms an equilateral triangle.
So what we do is work over the integer lattice when calculating a point sequence directly from a course.
Then after all those points are determined, we apply an affine equivalence from the square grid lattice to the penny-packing lattice.
This will give us a closed hex curve, as a sequence of points we may feed to PostScript.</p>
<p>To fix ideas, begin with the lattice \(\Lambda^3\) generated by \(v_s = (1,0)\) and \(v_p = (-1/2,\sqrt{3}/2).\)
We label these vectors so because if a hex curve comes into a vertex at \(vtx = (0,0)\) from \((-1/2, -\sqrt{3}/2)\) along the incoming vector \(v_i = v_s + v_p,\) then a starboard turn at \((0,0)\) would go along \(v_s,\) whereas a port turn at \((0,0)\) would go along \(v_p.\)</p>
<p>In general, at a vertex \(vtx\) a hex curve has an incoming vector \(v_i\)
that is the sum of the outgoing port and starboard vectors \(v_p, v_s.\)
More specifically, letting \(\rho\) be counterclockwise rotation by \(2\pi/3,\)
we have \(v_s = \rho(-v_i)\) and \(v_p = \rho(v_s).\)</p>
<p>Suppose a hex curve takes a starboard turn at \(vtx.\)
Then:</p>
<ul>
<li>the next vertex position would be at \(vtx + v_s;\)</li>
<li>the next incoming vector would be the current \(v_s;\)</li>
<li>the next starboard vector would be \(\rho(-v_s) = -v_p;\) and</li>
<li>the next port vector would be \(\rho(-v_p) = v_i.\)</li>
</ul>
<p>If a hex curve takes a port turn at \(vtx\) instead, then:</p>
<ul>
<li>the next vertex position would be at \(vtx + v_p;\)</li>
<li>the next incoming vector would be the current \(v_p;\)</li>
<li>the next starboard vector would be \(\rho(-v_p) = v_i;\) and</li>
<li>the next port vector would be \(\rho(v_i) = -v_s.\)</li>
</ul>
<p>That is, a starboard turn accomplishes the state change</p>
\[(vtx,v_s,v_p,v_i)\ \leftarrow\ (vtx+v_s,-v_p,v_i,v_s),\]
<p>whereas a port turn instead accomplishes</p>
\[(vtx,v_s,v_p,v_i)\ \leftarrow\ (vtx+v_p,v_i,-v_s,v_p).\]
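<p>As a small consistency check (using the integer-lattice stand-ins \(v_s = (1,0),\) \(v_p = (0,1),\) \(v_i = (1,1)\) for the vectors, an assumption made only for this illustration), six successive port turns should return the whole state to where it began, tracing out a closed hexagon:</p>

```python
def port_turn(state):
    # (vtx, v_s, v_p, v_i) <- (vtx + v_p, v_i, -v_s, v_p)
    vtx, vs, vp, vi = state
    add = lambda v, w: (v[0] + w[0], v[1] + w[1])
    neg = lambda v: (-v[0], -v[1])
    return (add(vtx, vp), vi, neg(vs), vp)

start = ((0, 0), (1, 0), (0, 1), (1, 1))  # vtx, v_s, v_p, v_i = v_s + v_p
state = start
for _ in range(6):
    state = port_turn(state)
assert state == start  # the course PPPPPP closes up
```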
<p>To convert a course of turn-type movement tokens into a sequence of points, then,
we begin with an appropriate initial collection of incoming, port, and starboard vectors,
and some initial vertex, and we simply apply the appropriate state changes above according
to the sequence of tokens, keeping track of the points seen in order.
We may encode this in Python as follows:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def add(v, w):
    return (v[0]+w[0], v[1]+w[1])

def negate(v):
    return (-v[0], -v[1])

def to_points(course, start):
    vtx, vs, vp, vi = start
    points = [vtx]
    toks = list(course)
    toks.reverse()
    while len(toks) != 0:
        tok = toks.pop()
        if tok == 'S':
            vtx, vs, vp, vi = add(vtx, vs), negate(vp), vi, vs
        elif tok == 'P':
            vtx, vs, vp, vi = add(vtx, vp), vi, negate(vs), vp
        else:
            raise Exception("Bad token")
        points.append(vtx)
    return points
</code></pre></div></div>
<p>N.B. the tokens are reversed in <code class="language-plaintext highlighter-rouge">to_points</code> because Python pops elements from the end of a list, not the beginning.</p>
<h1 id="drawing-the-curve">Drawing the curve</h1>
<p>We could choose a different lattice \(\Lambda = \langle v_s, v_p \rangle\) and still run the above code (starting with \(vtx = (0,0),\) say).
The resulting points are the vertices of a PL curve affinely equivalent to the desired sequence in \(\Lambda^3,\) via the unique linear map \(v_s,v_p \mapsto v_s^3,v_p^3.\)
Another way to look at this is that we generate first the (integer) coordinates of the points with respect to the basis \((v_s^3, v_p^3),\) then determine their Cartesian coordinates.</p>
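<p>Concretely, with \(v_s^3 = (1,0)\) and \(v_p^3 = (-1/2, \sqrt{3}/2)\) as at the start of the post, the change of basis is the linear map sketched below (the helper name is mine; the same formula appears in the drawing code):</p>

```python
from math import sqrt

S32 = sqrt(3) / 2

def to_cartesian(p):
    # coordinates (a, b) w.r.t. the basis v_s^3 = (1, 0), v_p^3 = (-1/2, sqrt(3)/2)
    a, b = p
    return (a - 0.5*b, S32*b)

# the standard basis vectors map to v_s^3 and v_p^3 respectively
assert to_cartesian((1, 0)) == (1.0, 0.0)
assert to_cartesian((0, 1)) == (-0.5, S32)
```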
<p>Finally, we would like all this to fit on a U.S. letter page.
This is not difficult to organize.
So, the following code, together with the code above, generates a PostScript file of the third wiggling of the simplest hex loop “PPPPPP,” illustrating what the limit Jordan curve looks like.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def draw(pts):
    with open("jordan.ps", 'w') as ps:
        (x0, y0) = pts[0]
        minx, maxx, miny, maxy = x0, x0, y0, y0
        for (x, y) in pts[1:]:
            minx, maxx = min(minx, x), max(maxx, x)
            miny, maxy = min(miny, y), max(maxy, y)
        midx = maxx//2 + minx//2 + (maxx%2 + minx%2)//2
        midy = maxy//2 + miny//2 + (maxy%2 + miny%2)//2
        rngx = maxx - minx
        rngy = maxy - miny
        ps.write("%!\n")
        # Fit US Letter size page to curve
        ps.write("72 dup scale\n")  # Length in inches
        ps.write("{0} {1} translate\n".format(8.5/2, 11/2))
        scale = min(8.5/rngx, 11/rngy)
        ps.write("{0} dup scale\n".format(scale))  # Length in coords
        ps.write(".95 dup scale\n")  # Margin room
        # Draw curve
        ps.write("newpath\n")
        ps.write("{0} {1} moveto\n".format(x0-midx, y0-midy))
        for (x, y) in pts[1:]:
            ps.write("{0} {1} lineto\n".format(x-midx, y-midy))
        ps.write(".111 setlinewidth\nstroke\n")
        ps.write("showpage\n")

from math import sqrt

if __name__ == "__main__":
    simple = "PPPPPP"
    # start state in the integer lattice: vtx, v_s, v_p, and v_i = v_s + v_p
    start = ((0, 0), (1, 0), (0, 1), (1, 1))
    intpts = to_points(wiggle(wiggle(wiggle(simple))), start)
    s32 = sqrt(3)/2
    hexpts = [(x - 0.5*y, s32*y) for (x, y) in intpts]
    draw(hexpts)
</code></pre></div></div>
<p>The ensuing PostScript file is rather large; I converted it to PDF and thence to a PNG file.
Here is what it looks like:</p>
<p><img src="/assets/images/jordan/curves/hex_jordan.png" alt="A nearly wild Jordan curve." /></p>
<p>That is the construction.
It remains to be shown that it has the two desired properties, namely</p>
<ul>
<li>that its limit yields a Jordan curve; and</li>
<li>that Jordan curve intersects no PL curve in a positive finite number of points.</li>
</ul>
<p>Roughly speaking, here is why these properties hold.
The limit is a Jordan curve because we leave enough space in the triangles for the curves to limit on a continuous injection, and because a point in a triangle of the curve is only wiggled to “nearby” triangles.
The Jordan curve is crossed finitely by no PL path because in each triangle, every line segment between the blue and red regions from the first triangle post crosses the curve more than once.</p>
<p><img src="/assets/images/jordan/local_wiggle.png" alt="The blue and red regions in a triangle." /></p>
<p>That “explanation” is what G. H. Hardy would call <em>gas.</em>
But it’s all I’m putting down for right now!</p>

<h1 id="jordan-curve-illustrations-a-triangular-example">Jordan curve illustrations: a triangular example (2021-03-04)</h1>
<p>There is a Jordan curve such that every piecewise-linear path from the inside to the outside intersects the curve infinitely many times.
I sketch the construction of such a curve here.</p>
<h1 id="triangles-instead">Triangles instead</h1>
<p>Last time I described the very simple class of <em>rectagons.</em>
I also gave a way to represent these curves differently, as a sequence of movement tokens.
There were three movement tokens: left, right, and straight.
For simplicity’s sake, it would be most convenient only to have two movement tokens.
(Such curves would be arguably more “digital” than the previous curves.)
I was unable to get this to work for a grid of squares.
Parity problems got in the way.
So I ditched the squares and used equilateral triangles instead.</p>
<p>For triangles, there is no straight movement token.
Only Port and Starboard movements are possible.
The question is what the local picture of a wiggling of the curve should be.
Happily the picture will look the same in all triangles.</p>
<p>Note that we don’t want an isotopy of the curve fixing the boundary of the triangle.
No such triangle-local isotopy can give us the sort of Jordan curve we want.
The triangle edges would end up being piecewise-linear paths from the inside to the outside.
Instead we just show the intersection of the triangle with the wiggled curve.</p>
<h1 id="hex-curves-and-turns">Hex curves and turns</h1>
<p>More formally, suppose \(T\) is an equilateral triangle.
Let \(C\) be its centroid.
A <em>bend</em> in \(T\) is the union of the two perpendicular segments dropped from \(C\) to two distinct sides of \(T.\)
There are three bends in every triangle.
Suppose \(PP\) is a 6-regular triangulation of the plane by equilateral triangles.
A <em>hex</em> curve subordinate to \(PP\) is a PL curve that is the union of bends in triangles of \(PP.\)</p>
<p>An oriented bend is a <em>turn.</em>
Every triangle has six turns possible in it.
The turns have two orbits under the three rotational symmetries of \(T.\)
One orbit consists of port turns; the other consists of starboard turns.
Up to orientation-preserving isometry, an oriented hex curve is determined by its corresponding sequence of port and starboard symbols “P” and “S,” which we will call its <em>course</em>.
<p>Our aim is to give a function \(f\) from hex curves to hex curves, such that</p>
<ul>
<li>\(f(\gamma)\) is isotopic to \(\gamma;\)</li>
<li>for all hex loops \(\gamma,\) \(\gamma_\infty = \lim_{n\to \infty} f^{\circ n}(\gamma)\) is a Jordan curve; and</li>
<li>for all hex loops \(\gamma,\) every piecewise-linear path between the inside and outside of \(\gamma_\infty\) intersects \(\gamma_\infty\) in infinitely many points.</li>
</ul>
<h1 id="the-turn-local-picture-of-a-wiggling">The turn-local picture of a wiggling</h1>
<p>As with wiggling the rectagons, a wiggled hex curve will no longer be subordinate to its original triangulation.
Instead, the new curve is subordinate to a finer triangulation.
For our choice of \(f,\) the new triangulation has eighty-one triangles per previous triangle, dividing the sides of the triangles into nine equal pieces.
Equivalently, the new triangulation is the old triangulation scaled down by a factor of nine by homothety at a vertex.</p>
<p>The local picture \(f(\gamma) \cap T\) of the wiggling is the same for both orientations of a bend.
However, it is most convenient to give the local picture via its courses.
This requires orientations.
So I will give the courses (and starting triangles) for the components of the local picture
after wiggling a port turn, and after wiggling a starboard turn.</p>
<p>Suppose we want to wiggle a port turn.
Let \(A\) be the side at which the turn begins, and \(F\) the side at which it ends.
Then the other side \(S\) of the triangle is starboard of the (port) turn, and the vertex is port of the turn.
Nine new triangles are incident to \(A.\)
Order them \(T_0^A,\ldots,T_8^A\) from port to starboard.
Likewise, nine new triangles are incident to \(F.\)
Order them \(T_0^F,\ldots,T_8^F\) from port to starboard.
The components’ boundary points lie in the triangles \(T_3^{A,F},T_4^{A,F},T_5^{A,F}.\)
To wit, the (three) components of \(f(\gamma) \cap T\) are as follows:</p>
<ul>
<li>\(M\) starts from \(T_3^A\) with course \(PSPSSSSPSPS,\) and thus ends at \(T_4^A;\)</li>
<li>\(Y\) starts from \(T_5^A\) with course \(PSPSPSPPSPS,\) and thus ends at \(T_3^F;\) and finally,</li>
<li>\(C\) starts from \(T_4^F\) with course \(PSPSSPSPSPSPPPPSPSPSPSPPSPS,\) and thus ends at \(T_5^F.\)</li>
</ul>
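<p>These courses are easy to record and sanity-check. The sketch below is my own bookkeeping, not code from this post; the dictionary keys and triangle labels are just informal names for the data above.</p>

```python
# The three components of the wiggled port turn: starting triangle
# and course (its word in the port/starboard tokens), as listed above.
components = {
    "M": ("T3_A", "PSPSSSSPSPS"),
    "Y": ("T5_A", "PSPSPSPPSPS"),
    "C": ("T4_F", "PSPSSPSPSPSPPPPSPSPSPSPPSPS"),
}

for name, (start, course) in components.items():
    # A course is a word in the two movement tokens P and S.
    assert set(course) <= {"P", "S"}
    # Each course starts and ends with the sequence PSPS.
    assert course.startswith("PSPS") and course.endswith("PSPS")
```

<p>For what it’s worth, the courses have \(11 + 11 + 27 = 49\) turns in total, which fits comfortably within the \(81\) small triangles of the subdivided triangle.</p>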
<p>Here is what that looks like in the triangle of a bend:</p>
<p><img src="/assets/images/jordan/local_wiggle.png" alt="Wiggling the local picture in a triangle." /></p>
<p>Curiously the courses all start and end with the sequence \(PSPS.\)</p>Robert C. Haraway, IIIThere is a Jordan curve such that every piecewise-linear path from the inside to the outside intersects the curve infinitely many times. I sketch the construction of such a curve here.Jordan curve illustrations: better pictures2020-09-18T00:00:00+00:002020-09-18T00:00:00+00:00https://bobbycyiii.github.io/2020/09/18/generating-jordan-3<p>This post has pictures suggesting a better example of a Jordan curve, using the isotopies suggested last time.</p>
<p>Explanations, code, and better pictures and curves coming soon!</p>
<p><img src="/assets/images/jordan/curves/jordan0.svg" alt="" />
<img src="/assets/images/jordan/curves/jordan1.svg" alt="" />
<img src="/assets/images/jordan/curves/jordan2.svg" alt="" />
<img src="/assets/images/jordan/curves/jordan3.png" alt="" /></p>Robert C. Haraway, IIIThis post has pictures suggesting a better example of a Jordan curve, using the isotopies suggested last time.Jordan curve illustrations: idea for a better example2020-07-11T00:00:00+00:002020-07-11T00:00:00+00:00https://bobbycyiii.github.io/2020/07/11/generating-jordan-2<p>There is a Jordan curve such that every piecewise-linear path from the inside to the outside intersects the curve infinitely many times.
We will begin the construction of such a curve here.</p>
<p>The construction is inspired by <a href="https://mathoverflow.net/questions/100025/how-many-times-line-segments-can-intersect-a-jordan-curve">this sketch on MathOverflow</a> by Anton Petrunin.
The basic idea is to keep wobbling a curve locally at ever smaller scales.
Of course, it is very important how and at what scales one wobbles; otherwise one might end up with, for instance, the square-filling pictures from the <a href="/2020/07/03/generating-jordan-1.html">last post</a>.
So let’s fix some definitions and refine the basic idea.</p>
<p>First, we will choose an explicit class of curve to draw.
The simplest interesting curves are the <em>rectagons</em>.<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup></p>
<blockquote>
<p><strong>Definition</strong>:</p>
<p>A simple closed curve \(\gamma: S^1 \to \mathbb{R}^2\) is a <strong>rectagon</strong>
when its image is the union of finitely many horizontal and vertical unit line segments with endpoints in \(\mathbb{Z}^2.\)</p>
</blockquote>
<p><img src="/assets/images/jordan/curves/from_tree.png" alt="A rectagon." /></p>
<p>However, a Jordan curve limit of rectagons will just be another rectagon.
To get the desired limit we will employ appropriately scaled rectagons:</p>
<blockquote>
<p><strong>Definition</strong>:</p>
<p>A simple closed curve \(\gamma: S^1 \to \mathbb{R}^2\) is <strong>digital</strong>
when it is the image of a rectagon under a scaling map \((x,y) \mapsto (2^{-n} x, 2^{-n} y)\) for some \(n \in \mathbb{N}.\)</p>
</blockquote>
<p>Equivalently, a digital curve is a simple closed curve whose image is the union of finitely many horizontal and vertical line segments with endpoints in \(\mathscr{B} = \mathbb{Z}[1/2]^2,\) the collection of points in the plane whose coordinates have finite binary place-value expansions, hence the name <em>digital.</em></p>
<p>We will construct the desired Jordan curve as a limit of digital curves.
To that end, let us establish explicit representations of digital curves.
One obvious such representation is as sequences of points</p>
\[(0,0), (x_0, 0), (x_0, y_0), \ldots, (x_i, y_i), (x_{i+1}, y_i), (x_{i+1}, y_{i+1}), \ldots, (x_n, y_n), (0, y_n), (0,0)\]
<p>where exactly one coordinate changes between consecutive points, and where all the \(x_i\) and \(y_i\) lie in \(\mathscr{B}.\)
This is useful for proving the Jordan curve theorem for digital curves.</p>
<p>Here is another way to represent a digital curve that is more convenient for our purposes.
A digital curve is a rectagon \(\gamma\) scaled by some factor \(2^{-n}.\)
For every point \((x,y) \in \mathbb{Z}^2\) on \(\gamma,\) consider the intersection of \(\gamma\) with the unit square \([x-1/2,x+1/2]\times[y-1/2,y+1/2].\)
Orienting \(\gamma,\) up to rotational symmetry there are only three nonempty types of intersection: Forward, Left, and Right.</p>
<p><img src="/assets/images/jordan/flr.svg" alt="Forward, Left, and Right squares." /></p>
<p>Thus \(\gamma\) may be represented as a sequence of <em>movement tokens</em></p>
\[n_0\, A_0\, n_1\, A_1 \ldots,\]
<p>where each \(A_i\) is either Left or Right, and where a natural number indicates that number of Forward intersections.
This sequence together with the scaling factor \(2^{-n}\) we will call the <strong>intrinsic</strong> representation of the associated digital curve.<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup></p>
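<p>One way to make the intrinsic representation concrete is a turtle-style decoder. The sketch below is my own illustration, with one assumed convention: a Left or Right token turns the heading and then takes one unit step, while a natural number \(n\) takes \(n\) straight steps. Under that convention a counterclockwise unit square is the token sequence <code class="language-plaintext highlighter-rouge">[0, 'L', 0, 'L', 0, 'L', 0, 'L']</code>.</p>

```python
# Heading updates for a turtle on the integer grid.
LEFT  = lambda d: (-d[1],  d[0])   # rotate heading 90 degrees counterclockwise
RIGHT = lambda d: ( d[1], -d[0])   # rotate heading 90 degrees clockwise

def trace(tokens, start=(0, 0), heading=(1, 0)):
    """Walk an intrinsic token sequence like [1, 'L', 0, 'R', ...]:
    each natural number n means n Forward steps; each 'L'/'R' turns
    the heading and then takes one step.  Returns the lattice points
    visited, beginning with the start point."""
    x, y = start
    dx, dy = heading
    points = [start]
    for tok in tokens:
        if tok == 'L':
            dx, dy = LEFT((dx, dy))
            steps = 1
        elif tok == 'R':
            dx, dy = RIGHT((dx, dy))
            steps = 1
        else:
            steps = tok
        for _ in range(steps):
            x, y = x + dx, y + dy
            points.append((x, y))
    return points

# A unit-square rectagon: no straight runs, four left turns.
pts = trace([0, 'L', 0, 'L', 0, 'L', 0, 'L'])
assert pts[-1] == pts[0]     # the curve closes up
assert len(set(pts)) == 4    # it visits four distinct corners
```

<p>Scaling the resulting lattice points by the factor \(2^{-n}\) then recovers the digital curve from its intrinsic representation.</p>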
<p>Our goal is to wiggle, i.e. <em>isotope,</em> the curve on smaller and smaller scales, but not so much that it is no longer a curve.
I’ll try to articulate the motivation for this construction now.
First, once we multiply by the factor, scaled unit squares could constitute good local pictures at smaller and smaller scales.
We want to isotope the local pictures in these squares in such a way that digital segments from one side of the arc to the other in the square must intersect the arc in multiple points.
To be more precise, in each of the above pictures, the arc cuts the square into two pieces, say Port and Starboard.
After isotopy, we want all digital segments from the Port parts of the square’s boundary to the Starboard parts to intersect the arc multiple times.</p>
<p>Now, the most obvious such digital segments are the sides of the square perpendicular to the arc.
After isotopy, these need to intersect the arc multiple times.
Now, if we’re just going to wiggle the arc a little bit, then we can only introduce an even number of additional intersections of the arc with these sides.
For a simplest possible picture then, we want to have three intersection points with the incident sides.
However, we cannot simply replace the given arc with three parallel arcs; that would create two new curves.
After some trial-and-error,<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup> you could come up with the following pictures as I did:</p>
<p><img src="/assets/images/jordan/wiggle_f.svg" alt="Isotoping near a Forward square." /></p>
<p><img src="/assets/images/jordan/wiggle_l.svg" alt="Isotoping near a Left square." /></p>
<p><img src="/assets/images/jordan/wiggle_r.svg" alt="Isotoping near a Right square." /></p>
<p>If we isotope the green rectagon in each of its squares as shown above, we get a digital curve that is a rectagon scaled by \(2^{-3}.\)
One would like to say that the new rectagon is just the old rectagon, but with Forward tokens replaced with a new sequence of tokens, and likewise for Left and Right tokens.
This is technically true, but is misleading, since the replacements depend not only on the tokens, but also on their neighbors.
I will show how I organized this in the next post.</p>
<h2 id="footnotes">Footnotes</h2>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>This terminology is from Hales’s <a href="https://www.jstor.org/stable/27642361">Monthly article</a> on formally proving the Jordan curve theorem. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Après <a href="http://people.eecs.berkeley.edu/~bh/v1ch10/turtle.html">Logo</a>, we might also call this the <em>turtle</em> representation. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>Or after thinking about thin position. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>Robert C. Haraway, IIIThere is a Jordan curve such that every piecewise-linear path from the inside to the outside intersects the curve infinitely many times. We will begin the construction of such a curve here.Jordan curve illustrations: attempts2020-07-03T00:00:00+00:002020-07-03T00:00:00+00:00https://bobbycyiii.github.io/2020/07/03/generating-jordan-1<p>The Jordan curve theorem is famously simple to state and tricky to prove.
I want to explain why the Jordan curve theorem ought to be difficult to prove, by showing some pictures of very complicated Jordan curves.</p>
<p>This theorem states the following “obvious fact”:</p>
<blockquote>
<p><strong>Theorem</strong> (Jordan)</p>
<p>A simple closed curve in the plane separates the plane into two components, one bounded and one unbounded.</p>
</blockquote>
<p>Here, a <em>closed curve</em> in the plane is a continuous map \(f: S^1 \to \mathbb{R}^2\) from a circle into the plane.
It is <em>simple</em> when \(f\) is injective—that is, when \(f\) takes distinct points to distinct points.
Simple closed curves in the plane are also called <em>Jordan curves.</em></p>
<p>One source of such curves is simple closed approximations of space-filling curves like the one in the <a href="/2020/06/15/whence-topology-2.html">post on monsters:</a></p>
<p><img src="/assets/images/jordan/curves/from_z_order.svg" alt="A z-order based Jordan curve." /></p>
<p><img src="/assets/images/jordan/curves/from_hilbert.svg" alt="A Jordan curve based on Hilbert curves." /></p>
<p>The usual game to play with Jordan curves is to draw some horrible mess like the above, then pick a point in the middle of it off of the curve, and try to figure out if the point lies in the bounded or unbounded part.
And the usual strategy for winning this game is to draw a ray starting at the given point, and to count how many times the ray intersects the curve.
If it intersects the curve an even number of times, the point is in the unbounded component.</p>
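<p>For polygonal curves this strategy is easy to mechanize as the classic even–odd test. Here is a sketch (my own code, not from this post), assuming the ray is in generic position — it hits no vertex and no horizontal edge of the polygon:</p>

```python
def crossings(point, polygon):
    """Count crossings of a rightward horizontal ray from `point`
    with the edges of `polygon` (a list of vertices in order).
    Assumes generic position: the ray meets no vertex and is not
    collinear with any edge."""
    px, py = point
    n = len(polygon)
    count = 0
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):  # the edge straddles the ray's height
            # x-coordinate where the edge crosses the ray's height
            x = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x > px:
                count += 1
    return count

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
assert crossings((2, 2), square) % 2 == 1   # odd: inside
assert crossings((5, 2), square) % 2 == 0   # even: outside
```

<p>For a rectagon one can always nudge the ray to a non-lattice height to reach this generic position.</p>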
<p>There are at least two problems with this strategy.</p>
<p>The first, as evidenced by the above two curves, is that there might be many intersections with the curve.</p>
<p>The second problem is that the strategy only works when the points of intersection of the ray and the curve are “tame,” in the sense that the local picture around the point can be isotoped to look like the intersection of two lines.</p>
<p>Moreover, even for the simplest possible Jordan curves, this strategy only works “generically” instead of always.
The above curves show that there can be more nongeneric rays than one might want to think about.
The simple picture below shows that the set of nongeneric rays can be dense for a given point.</p>
<p><img src="/assets/images/jordan/curves/crooked.png" alt="A bad point and curve." /></p>
<p>You can get a picture like this as follows.
First, recall the signum function</p>
\[sgn(x) = \begin{cases} -1, & x < 0,\\ 0, & x = 0,\\ 1, & x > 0.\end{cases}\]
<p>Let \(\ell\) be the unit step function</p>

\[\ell(x) = \frac{1+sgn(x)}{2}.\]

<p>It jumps from \(0\) to \(1\) at the origin, taking the intermediate value \(1/2\) there.</p>

<p>Finally, define</p>

\[f(x) = \sum_{i \geq 1} 4^{-i} \sum_{0 \leq j < 2^{i-1}} \ell\left(x-\frac{2j+1}{2^i}\right).\]

<p>Or, if you like, for \(x \in [0,1)\) we define \(f(x) = \int_0^x d\mu,\) where \(\mu\) is the measure with a point mass at \((2j+1)/2^i\) of weight \(1/4^i\) for all such dyadic rationals in \([0,1).\) (The level-\(i\) weights \(1/4^i\) make the total mass \(\sum_{i \geq 1} 2^{i-1}/4^i = 1/2\) finite; weights of \(1/2^i\) would contribute mass \(1/2\) per level and diverge.)</p>
<p>Then the graph of \(f\) has a jump discontinuity at every dyadic rational.
This remains true when we draw the graph of \(r = 1+f(\theta/(2\pi))\) in polar coordinates.
Plugging the jumps with segments yields a Jordan curve like the one above.
Every ray from the origin at a dyadic fraction of a full turn intersects this curve in a segment of positive length, and the set of such rays is dense.</p>
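<p>A finite truncation of this construction can be computed with exact rational arithmetic. The sketch below is my own code, following the measure description with the summable level-\(i\) weights \(1/4^i\) (an assumption so the truncations converge); the helper name <code class="language-plaintext highlighter-rouge">f_truncated</code> is hypothetical:</p>

```python
from fractions import Fraction

def f_truncated(x, N=12):
    """Truncation of f(x) = integral of mu over [0, x], where mu puts a
    point mass of weight 1/4**i at each (2j+1)/2**i in [0,1); levels
    i > N are discarded."""
    total = Fraction(0)
    for i in range(1, N + 1):
        w = Fraction(1, 4**i)
        for j in range(2**(i - 1)):
            if Fraction(2 * j + 1, 2**i) <= x:
                total += w
    return total

# f jumps at every dyadic rational: the values just below and at 1/2
# differ by exactly the mass sitting at 1/2, namely 1/4.
eps = Fraction(1, 2**20)
jump = f_truncated(Fraction(1, 2)) - f_truncated(Fraction(1, 2) - eps)
assert jump == Fraction(1, 4)

# f is nondecreasing on a sample of points.
xs = [Fraction(k, 64) for k in range(65)]
vals = [f_truncated(t) for t in xs]
assert all(a <= b for a, b in zip(vals, vals[1:]))
```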
<p>These curves should give the reader pause.
But in my opinion, they are not sufficiently representative examples.
For instance, in the previous example one can draw from the red dot a small segment downwards, and then draw a segment going northeast and intersecting the curve in its longest segment.
So, even though there is no <em>ray</em> from the red dot intersecting the curve nicely, there is at least a (fairly easy-to-spot) <em>piecewise-linear path</em> from the red dot that intersects the curve nicely.
Moreover, the first examples, although very complicated, do not limit on an “infinitely complicated” curve.
Instead, they limit on a square, which is certainly<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup> not a single curve.
This raises the question of whether there is a Jordan curve \(\Gamma\) such that every piecewise-linear path from the bounded component to the unbounded component intersects \(\Gamma\) infinitely many times.
This would be representative of the complicated, awkward nature of “generic” Jordan curves.
Constructing and drawing a well-motivated<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup> such curve is the subject of the next few posts.</p>
<h2 id="footnotes">Footnotes</h2>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>I say “certainly,” but this is also not particularly easy to prove. It comes from what’s called <em>invariance of domain.</em> <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>One can construct such curves as the limit sets of quasi-Fuchsian Kleinian groups, e.g., <a href="http://www.josleys.com/show_image.php?galid=263&imageid=8193">this image</a> of Jos Leys (images 42, 44, 45b–d, 49, 52, 53, 59, and 68 from <a href="http://www.josleys.com/show_gallery.php?galid=263">the same gallery</a> are also good examples). These are certainly important objects, but their motivation would take us far afield. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>Robert C. Haraway, IIIThe Jordan curve theorem is famously simple to state and tricky to prove. I want to explain why the Jordan curve theorem ought to be difficult to prove, by showing some pictures of very complicated Jordan curves.Whence topology: main concepts2020-06-26T00:00:00+00:002020-06-26T00:00:00+00:00https://bobbycyiii.github.io/2020/06/26/whence-topology-4<p>Having introduced you to the swamp and the monsters, let me finish by sketching the structures used to make some sense out of it all.</p>
<h2 id="spaces">Spaces</h2>
<p>Very briefly, a <em>(topological) space</em> is a set endowed with a notion of “neighborhood”.
Examples include</p>
<ul>
<li>geometric objects,</li>
<li>families of geometric objects, and</li>
<li>families of functions.</li>
</ul>
<p>The word “space” in this context has nothing whatever to do with three degrees of freedom.<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>
It just means that this set is being regarded as a sort of geometric object in itself, and that the properties of it that we are interested in come from the given notion of “neighborhood.”</p>
<p>The conventions on what constitute an acceptable notion of “neighborhood” are very permissive.
Here they are, with “neighborhood” shortened to “nbd”:</p>
<ul>
<li>Nbds of a point contain that point. That is, if \(N\) is a nbd of \(x\), then \(x \in N\).</li>
<li>Supersets of nbds are nbds too. That is, if \(N\) is a nbd of \(x\) and \(N \subset U\), then \(U\) is a nbd of \(x\).</li>
<li>Finite intersections preserve nbds. That is, if \(M\) and \(N\) are nbds of \(x\), then \(M \cap N\) is a nbd of \(x\).</li>
<li>Nbds in nbds: if \(N\) is a nbd of \(x\), then there is a nbd \(M\) of \(x\) such that \(N\) is a nbd of every point in \(M\).</li>
</ul>
<p>This is not the usual definition, but it is equivalent to it.</p>
<p>As an example, let’s say a subset \(N\) of the plane is a nbd of a point \(p\) when there is an open disc around \(p\) that fits inside \(N\).
This endows the plane with a notion of “neighborhood” satisfying the above axioms.
(The third axiom is the least easy to prove for this example.)</p>
<p><img src="/assets/images/topology_3/plane_third_nbd_axiom.svg" alt="The third axiom for the plane." /></p>
<p>You can generalize this approach to sets with fairly arbitrary notions of distance.
These are called metric spaces, and are among the nicest kinds of topological space.</p>
<p>Even so, it is not at all clear from this definition that one can do much of worth with it at all.
The first indication it might be useful comes from the next main concept.</p>
<h2 id="continuity">Continuity</h2>
<p>Suppose \(X\) and \(Y\) are topological spaces.
A function \(f: X \to Y\) is <em>continuous</em> when it “respects the notion of neighborhood.”
Intuitively we want continuous functions to be those that take “nearby points to nearby points.”</p>
<p>There are at least two approaches to making this rigorous.
One is to define a function to be continuous when it takes “infinitely nearby points to infinitely nearby points.”
The problem with this is that, as far as I know, properly defining “infinitely nearby” takes more mathematical logic than I would like to put on my computational topology blog.</p>
<p>The more traditional approach is not as direct, but still works, and does not require a large up-front logical down payment.
We imagine neighborhoods are the appropriate possible meanings for “nearby.”
Then a function is continuous at \(x\) when, no matter what appropriate meaning we pick for being “nearby \(f(x)\)”, there is an appropriate meaning for “nearby \(x\)” such that \(f\) takes points “nearby \(x\)” to points “nearby \(f(x)\)”.
That is, <strong>\(f\) is continuous at \(x\)</strong> when, for all neighborhoods \(M\) of \(f(x)\), there is a neighborhood \(N\) of \(x\) such that \(f(N) \subset M\).
It’s <strong>continuous</strong> with no further qualifications when it’s continuous at all points in its domain.</p>
<p><img src="/assets/images/topology_3/continuity.svg" alt="Continuity." /></p>
<p>A notion of neighborhood is thus the bare minimum amount of structure needed on sets to tell you what it means for a function from one to the other to be continuous.
Topology is the study of these sets and functions.</p>
<h2 id="equivalence">Equivalence</h2>
<p>Whenever defining a field of mathematical study, it is important to make clear what is meant by two mathematical objects being “basically the same” or “equivalent”.
For instance, in set theory, two sets are “equivalent” when there is a one-to-one, onto function between them, a <em>bijection</em> between them.
In order theory, two ordered sets are considered equivalent when there is an order-preserving bijection from one to the other.
In group theory, two groups are isomorphic when there is a bijective multiplication-preserving map from one group to the other.
In geometry, two metric spaces are isometric when there is a distance-preserving bijection from one space to the other.</p>
<p>In topology, the problem of equivalence is more subtle.
One is tempted to say, following the previous notions, that two spaces are equivalent when there is a continuous bijection from one to the other.
This doesn’t work, for a silly reason.
On every set \(S\), there is a very easy topology to construct, the <em>indiscrete</em> topology.
In the indiscrete topology on \(S\), the only nbd of any point \(x\) is the whole set \(S\).
If \(Y\) is an indiscrete topological space constructed this way, then every function \(f: X \to Y\) is continuous.
If we allowed “continuous bijection” as our notion of equivalence, the whole field would reduce to set theory.</p>
<p>But there is another perspective that does work.
Notice that in set theory, equivalences are the same thing as invertible functions.
The same is true in order theory, group theory, and geometry.
For instance, an isometry from \(X\) to \(Y\) is a distance-preserving function \(f: X \to Y\) that has a distance-preserving inverse \(g: Y \to X\).
Likewise, monotone bijections have monotone inverses, and bijective homomorphisms’ inverses are also homomorphisms.
In these fields it is redundant to say the inverses must lie in the same category of functions, but in topology and other fields it is essential.</p>
<p>So instead, in topology, a <strong>homeomorphism</strong> from \(X\) to \(Y\) is a continuous function \(f: X \to Y\) admitting a <em>continuous</em> inverse \(g: Y \to X\).
The existence of a homeomorphism is the right notion of equivalence, usually written \(\approx.\)
On the whole, it is a fairly weak notion of equivalence.
Circles, triangles, squares, and knots, just to give an example, are all homeomorphic.
So notions of distance, angle, or even straight line are thrown right out of consideration.</p>
<p><img src="/assets/images/topology_3/homeo.svg" alt="The standard homeomorphism." /></p>
<p>It is hard to imagine at first that any properties are left unchanged by <em>arbitrary</em> homeomorphism.
Yet by the very same token, a property is <em>very fundamental indeed</em> if not even an arbitrary homeomorphism can change it.</p>
<h2 id="connectedness">Connectedness</h2>
<p>The easiest such property to prove unchangeable, once one notices it, is connectedness.
For instance, the interval \(I = [0,1]\) should be connected.
In general, we say that a space \(X\) is <em>path-connected</em> when, for any two points \(p,q \in X\), there is a path from \(p\) to \(q\)—that is, a continuous function \(\gamma: I\to X\) such that \(\gamma(0) = p\) and \(\gamma(1) = q\).
Given any continuous map \(\phi: X \to Y\), if there’s a path \(\gamma: [0,1] \to X\) from \(p\) to \(q,\) then \(\phi \circ \gamma\) is a path from \(\phi(p)\) to \(\phi(q)\).</p>
<p><img src="/assets/images/topology_3/path.svg" alt="Image of a path under a map." /></p>
<p>So path-connectedness is a topological invariant.</p>
<h2 id="separation">Separation</h2>
<p>The real numbers satisfy a “betweenness” law: if \(x\) and \(y\) are distinct, then they lie in disjoint open intervals.
Not all spaces—not even all <em>useful</em> spaces—satisfy such nice properties.<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup>
The most geometrically familiar such spaces do, though.</p>
<p><img src="/assets/images/topology_3/Hausdorff.svg" alt="Hausdorffness." /></p>
<p>These properties are traditionally called <em>separation axioms.</em></p>
<h2 id="compactness-and-convergence">Compactness and convergence</h2>
<p>This is a subtler concept.
Recall that every bounded sequence of real numbers has a convergent subsequence.
This is a property of real numbers, but we can take a different perspective.
Picking lower and upper bounds \(L\) and \(U\), we could say instead that for every sequence \(A: \{0, 1, 2, \ldots\} \to [L,U],\) there is a convergent subsequence \(A|_S: S \to [L,U],\) where \(S\) is some infinite subset of \(\{0,1,2,\ldots\}.\)
That is, every sequence in \([L, U]\) has a convergent subsequence.
This looks more like a property of intervals of the form \([L,U]\) than a property of real numbers.
This is basically what <strong>compactness</strong> is: compact spaces are those in which one can expect limiting processes to converge upon refinement.</p>
<p><img src="/assets/images/topology_3/filter.svg" alt="A filter converging to a point in a compact space." /></p>
<p>Compact spaces and their relatives are the simplest topological spaces.
There is a gradation of potential horribleness for spaces; compact spaces are near the nice extreme.
If spaces were pets, compact spaces would be goldfish or guinea pigs.</p>
<h2 id="concluding-impressions">Concluding impressions</h2>
<h1 id="for-analysts">For analysts</h1>
<p>Topology is a necessity for analysis.
It is useful for instilling doubt in the seemingly obvious, in “playing for safety” in making claims, and as a way to export geometric intuition to objects that cannot be visualized.
Compactness is very useful—for instance, in stating the Arzelà-Ascoli theorem.</p>
<h1 id="for-geometers">For geometers</h1>
<p>Topology is the groundwork and language necessary for all the constructions you <em>actually</em> want to do—vector fields, Riemannian metrics, flows, etc.
Compactness is very useful—for instance, in proving the Hopf-Rinow theorem.</p>
<h1 id="but-most-of-all">But most of all…</h1>
<p>Topology is the study of invariants of spaces under continuous deformation—the most fundamental properties of a geometric object.
Compactness is—well, you get the picture.</p>
<h2 id="footnotes">Footnotes</h2>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>A better word to use would have been “locale”; this has in fact now been employed for a broader class of objects including spaces. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>For instance, Zariski and Scott topologies usually don’t have this property. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>Robert C. Haraway, IIIHaving introduced you to the swamp and the monsters, let me finish by sketching the structures used to make some sense out of it all.Whence topology: monsters2020-06-16T00:00:00+00:002020-06-16T00:00:00+00:00https://bobbycyiii.github.io/2020/06/16/whence-topology-3<p>The swamp of topology has monsters lurking inside.</p>
<h2 id="non-differentiable-functions">Non-differentiable functions</h2>
<p>To begin with, let us recall some monsters from analysis.
Consider Dirichlet’s function</p>
\[\delta(x) = \begin{cases} 1, & x \in \mathbb{Q},\\ 0, & x \notin \mathbb{Q}.\end{cases}\]
<p>This function is not continuous at any point!
So functions may be very discontinuous, depending on how you want to define them.</p>
<p>But continuous functions should at least have tangent lines, right?
Except for absolute-value-type functions, and roots, I guess.
But in general, a continuous function is mostly differentiable, right?</p>
<p>Actually, there are continuous functions that have no tangent lines.
Weierstrass was the first to point out the importance of such a function as a counterexample to naive intuition about functions.
Perhaps the simplest example of such a function is the <a href="https://en.wikipedia.org/wiki/Blancmange_curve">Takagi-Landsberg</a> <em><a href="https://en.wikipedia.org/wiki/Blancmange">blancmange</a></em>.
The basic idea of the blancmange is as follows.
Let \(t\) be your favorite continuous function with, say, period 1, and with a corner (a point of nondifferentiability) at \(1/2\).
For instance, you could let \(t\) be the triangle wave function given by \(t(x) = 1 - |2(x - \lfloor x \rfloor) - 1|.\)
Then the new function \(t_1(x) = t(2x)/2\) is continuous, and has corners at \(1/4\) and \(3/4\).
Likewise \(t_2(x) = t(4x)/4\) has corners at \(1/8\), \(3/8\), \(5/8\), and \(7/8\).
In general, the function \(t_n(x) = t(2^n x)/2^n\) has a corner at \(k/2^{n+1}\) for all odd \(k\) less than \(2^{n+1}\).</p>
<p><img src="/assets/images/topology_2/blancmange_components.svg" alt="Blancmange components." /></p>
<p>The blancmange function then is the sum of all of these:</p>
\[B(x) = \sum_{n=0}^\infty t_n(x).\]
<p>Each partial sum up to \(t_N\) has at least as many corners as \(t_N\), since the previous functions are differentiable at these numbers.
In fact, the blancmange is continuous, but differentiable nowhere.</p>
<p><img src="/assets/images/topology_2/blancmange.svg" alt="The blancmange." /></p>
<h2 id="the-cantor-set">The Cantor set</h2>
<p>No mention of monsters is complete without the Cantor set.
It is the progenitor of so many monsters, even if it is not monstrous itself.
Very simply, the Cantor set \(K\) is the subset of \([0,1]\) whose expansion in <a href="https://en.wikipedia.org/wiki/Ternary_numeral_system">ternary notation</a> has no digit 1, but only digits 0 and 2.</p>
<p>Clearly we have a map \(s: K \to [0,1]\) given by dividing digits by two then regarding them in binary.
This is obviously a surjection.
Strangely it is also a continuous map.</p>
<h2 id="space-filling-curves">Space-filling curves</h2>
<p>With that analytic warmup, let us introduce some of the classic monsters of topology.</p>
<p>We will start in dimension one.
In 1890 Giuseppe Peano constructed a continuous function \(f: [0,1] \to [0,1]^2\) that is surjective, i.e. that hits every point of the square.
This is a “curve” in the sense that it is a continuous function, or <em>map</em>, from the interval into some other geometric object, in this case the unit square.
Hilbert a year later gave another example, with pictures suggesting how the function operated.</p>
<p>Computationally, the simplest example is probably the <a href="https://en.wikipedia.org/wiki/Z-order_curve"><em>Z-order curve</em></a>, defined as follows.<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>
First, let’s spell out the map \(Z_K: K \to [0,1]\) alluded to above.
If \(x \in K\), write \(x = 0.a_0 a_1 a_2 a_3 \ldots.\)
Since \(x \in K,\) all the digits \(a_i\) are either 0 or 2.
Define \(Z_K(x) = (0.b_0 b_2 \ldots, 0.b_1 b_3 \ldots),\) where \(b_i = a_i/2,\) but interpreted in binary.</p>
<p>For example, \(218/729 = 0.\underline{0}\overline{2}\underline{2}\overline{0}\underline{0}\overline{2}_3\) (i.e. in ternary), so \(Z_K(218/729) = (0.\underline{0}\,\underline{1}\,\underline{0}_2, 0.\overline{1}\,\overline{0}\,\overline{1}_2)\) in binary, i.e. \((1/4, 5/8).\)</p>
<p>To get our curve \(Z: [0,1] \to [0,1]^2,\) just interpolate \(Z_K\) linearly between the values at the nearest elements of \(K.\)</p>
<p>For example, \(227/729 = 0.022102_3\) is the midpoint between \(218/729 = 0.022002_3\) and \(236/729 = 0.022202_3.\)
So \(Z(0.022102_3)\) is the midpoint of the line segment between \(Z_K(0.022002_3) = (0.010_2, 0.101_2)\) and \(Z_K(0.022202_3) = (0.010_2, 0.111_2).\)
That is, \(Z(227/729) = (0.010_2, 0.11_2) = (1/4, 3/4).\)</p>
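<p>The digit shuffle in the definition of \(Z_K\) is mechanical, and can be sketched in a few lines of Python on finite digit strings (the function name <code class="language-plaintext highlighter-rouge">z_k</code> is my own, not standard):</p>

```python
def z_k(ternary_digits):
    # ternary_digits: expansion of a Cantor set element (digits 0 or 2).
    # Halve the digits, then deal them alternately into the binary
    # expansions of the two coordinates.
    bits = [d // 2 for d in ternary_digits]
    x = sum(b / 2**(i + 1) for i, b in enumerate(bits[0::2]))
    y = sum(b / 2**(i + 1) for i, b in enumerate(bits[1::2]))
    return (x, y)
```

For instance, <code class="language-plaintext highlighter-rouge">z_k([0, 2, 2, 0, 0, 2])</code> reproduces the worked example above, returning \((1/4, 5/8)\).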
<p>Actual pictures of square-filling curves are rather unilluminating, as Thurston lamented in his famous <a href="https://www.ams.org/journals/bull/1982-06-03/S0273-0979-1982-15003-0/">Bulletin article</a>, pp. 372–3.
What you want instead are images of “nearby” curves that don’t quite fill the square.
These give a more instructive picture.</p>
For instance, say we truncated \(K\) to the numbers \(K_n \subset K\) with at most \(2n\) ternary digits.
We could still just as well define \(Z_{K_n}\) and \(Z_n\).
Here is a picture of the curve \(Z_7\) so defined.</p>
<p><img src="/assets/images/topology_2/z_order.svg" alt="Z-order curve of order 7." /></p>
<h2 id="the-topologists-sine-curve">The topologists’ sine curve</h2>
<p>A favorite counterexample is the following subset of the plane:</p>
\[C = (\{0\}\times [0,1]) \cup \{ (x,y)\ |\ y = \sin(2\pi/x), 0 < x < 1\}\]
<p>The graph of the right-hand disjunct looks like this:</p>
<p><img src="/assets/images/topology_2/topo_sine_curve.svg" alt="Sine of 2 pi over x." /></p>
<p>You can see this becomes a mess as it approaches the left-hand disjunct.
\(C\) is connected in the usual topological sense.
However, it is not path-connected.
There is no path that starts at a point of \(\{0\} \times [0,1]\), ends at a point of the right-hand part, and stays in \(C\) the whole time.
So connected sets, even connected sets in the plane, can be path-disconnected.</p>
<h2 id="the-horned-sphere">The horned sphere</h2>
<p>The Schoenflies theorem strengthens the Jordan curve theorem, stating</p>
<blockquote>
<p><strong>Theorem</strong> (Schoenflies)</p>
<p>A simple closed curve in the plane separates the plane into a disc and a punctured disc.</p>
</blockquote>
<p>The same is not true for embedded spheres in space.
The usual example is Alexander’s horned sphere.
This comes from a recursive construction on a cylinder.
Consider the subset depicted below of a cylinder.</p>
<p><img src="/assets/images/topology_2/the_horning.svg" alt="The horning." /></p>
<p>It consists of two components that, geometrically, wrap around one another.
Each component \(K\) has two marked discs on it.
There is also an associated cylinder \(C_K\) between the discs (which I have not drawn in the figure).
We have constructed this so that \(C_K\) is not only disjoint from the other component but also disjoint from its cylinder.
Thus we may attach these cylinders, and form two new scaled copies of this construction, one inside each of these cylinders.
This begets four new constructions, then eight, and so on.
Attaching the discs of the original cylinder to a ball yields a ball whose boundary is Alexander’s horned sphere.
This sphere does bound a ball on one side, but it does not bound a punctured ball on the other side.</p>
<p>The list of monsters goes on and on, and up, and down.
There are earwig-monsters like the <a href="https://en.wikipedia.org/wiki/Sierpi%C5%84ski_space">Sierpiński space</a>, crawling under your feet ready to bite.
There are unnatural creatures of ordinary size like <a href="https://en.wikipedia.org/wiki/Exotic_sphere">exotic spheres</a> that subvert your expectations.
There are titans like the <a href="https://en.wikipedia.org/wiki/Long_line_(topology)">long line</a> that can stomp any nice conjectures you might want to prove.
The proliferation of counterexamples in topology can make it seem like there is little hope of proving much at all.</p>
<p>Nevertheless, even in the swamp of topology, foundations have been laid that allow one to build structures either to keep the beasts and humidity at bay, or outright capture the beasts and put them on display.
I’ll detail some such foundations next time to conclude this little introduction.</p>
<h2 id="footnotes">Footnotes</h2>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>This example is apparently due originally to Lebesgue, though it is often called a Morton curve in the context of computing science. I like this example because I remember working through Exercise 7.14 from baby Rudin as a freshman. Rudin actually attributes this example to Schoenberg. But if you follow the reference, Schoenberg makes the appropriate attribution to Lebesgue. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>Robert C. Haraway, IIIThe swamp of topology has monsters lurking inside.