\documentclass[csh.tex]{subfiles}
\usepackage{amssymb, amsmath}
\usepackage[backend=bibtex,citestyle=authoryear-icomp]{biblatex}
\usepackage[all]{xy}
\usepackage{url}
\newcommand\pushout{\ar@{}[dr]|(.9)\ulcorner}
\newcommand\parr{\ar@<.5ex>[r]\ar@<-.5ex>[r]}
\begin{document}
\section{9/9/19}
\url{https://mathoverflow.net/questions/331121/extending-kan-fibrations-without-using-minimal-fibrations}
Szumilo's answer is not constructive enough, in that it still
makes all monic cofibrations. Shulman's solution relies on
\(\kappa\)-representability, which was the other obstacle to
overcome.
Added my draft as another answer. Let's see what happens.
\section{26/8/19}
LaTeX hurts my back.
I now have solved the problem. I think I want to add the
example of modest Kan complexes, and explain why/how the
construction should work there, without claiming victory just
yet.
\section{14/8/19}
The time has come for the puzzle of defining the ascend functor.
For $f\of X\to \Delta^n$ the $n$-dimensional simplices of $Af$ are freely glued
in simplices between $p$-dimensional simplices of $f$ and $q$-dimensional
simplices of $f^{\delta^n_k}$, such that $p+q=n-1$. The whole $f^\delta$ part is
supposed to be located over $k$, at the start of the fibre. So how do we break a
morphism in two parts for defining the action?
The morphism $f$ assigns some element $\xi$ of $|\Delta^n|$ to each
$x\of \dom f$ and the other part has a dimension $d$. So we have this couple
$\tuplet{\xi,d}$. All that is needed is a consistent way to break up
$\phi\of[a]\to [\dim(\xi)+y+1]$ into two morphisms $\phi_0\of[a_0]\to\dom(\xi)$
and $\phi_1\of[a_1]\to [y]$, to apply to the two constituent parts.
\begin{align*}
m_0 &= \max(\set{i\of[a]| \phi(i) \of \dom(\xi),\xi(\phi(i)) < k }\cup\set{-1})\\
m_1 &= \max(\set{i\of[a]| y < \phi(i), \xi(\phi(i) - y - 1) < k }\cup\set{-1})\\
a_0 &= a - m_1 + m_0\\% $m_1+1$ maps to $m_0+1$!
a_1 &= m_1 - m_0 - 1\\
\phi_0(i) \textrm{ for } i \leq m_0 &= \phi(i)\\
\phi_0(i) \textrm{ for } i > m_0 &= \phi(i + m_1 - m_0) - y - 1\\
\phi_1(i) &= \phi(i + m_0 + 1) - \phi(m_0 + 1)
\end{align*}
I don't believe I made any off-by-one errors here, but it is always hard to be
sure.
\paragraph{Conclusion}
I got the proof down. The rest, I believe is context.
\section{29/7/19}
I was working toward an argument that works for internal families of morphisms
as long as their domains are small in an internalized sense. GHSS seems to be
using decidable properties of cofibrations to make the small object argument
work. I doubt that this is necessary.
"A small object argument for incomplete toposes"
How about this: given $f\of X \to Y$, consider small morphisms $g\of W \to Y$
that factor through $f$. I.e. the category of small objects over $f$.
The one-step factorization using these small morphisms is not sufficient;
the colimit must be taken over $n$-fold factorizations for all $n$. Now,
a morphism between small morphisms induces a morphism between these
factorizations as long as the number of factorizations is greater for the target
object. It is a bit underwhelming, but the result is the product category of
the poset $(\nno, \leq)$ and the category of small morphisms over $f$.
That is almost too simple\dots.
Take the colimit here. The hard part is the domain. There is an object of small
morphisms over $f$.
Perhaps it is somehow easier the other way around. For small morphisms, the
transfinite factorisations supposedly just exist.
No!
I want to avoid making the codomain of the generic left morphisms small as well,
but without such an assumption, it is not clear that a colimit exists.
This could be a case of mixed concerns. On one hand, the construction is a
recursive colimit, which may not exist; on the other\dots
\paragraph{Jump ahead}
Consider $f\of X\to Y$ again. There is a diagram of small fibrations over
$Y$ and that could be useful. Maybe there simply is a limit. There are structure
morphisms to work with. And morphisms of small morphisms over $f$ can be judged
to be left morphisms due to the lifting properties.
\paragraph{Inspiration}
I feel up to nothing today. I have no faith I am doing any good, and I am not
having any fun working on this either. Instead, I am once again trying to
remember what my plan was, because despite all these notes, I don't know how to
write this paper.
I came up with a combinatorial proof for the descent of fibrations along
cofibrations. That puzzle motivated me for the last 50 days. Did I do any good
by solving it?
The proof is supposed to show that the universe of types is fibrant. Five years
ago nobody seemed to be aware of a combinatorial approach that actually worked.
That is a reason to think that these facts aren't widely known.
"Fibrant universes--a combinatorial proof"
This was the manageable part.
The good days were: 12/6-19/6. After those, I got distracted. Keep writing about
this.
\paragraph{Ascent functor}
After mucking about with encodings, I am not eager to start again, though the
difference should not matter all that much.
A small part of the pain comes from the pervasive off-by-one errors.
The real pain, however, is in defining the restriction maps.
Let's just start with $\alpha\of[a]\to[n]$. We want to use an initial segment
of $\alpha_k$ for glueing, so we can just indicate how many elements we want to
use, leading to $(\alpha,i)$ with
$f(\alpha,i):[a - i] \to [a]$ and $g(\alpha,i):[i - 1]\to [a]$ as components.
I have no good names yet. We need a way to split morphisms up, however.
$f(\phi,\alpha, i)$ and $g(\phi,\alpha,i)$.
It is a convolution of sorts.
The sad thing is that the set of simplices is simple, but the action is
incredibly hard to define.
Idea: $\alpha +_k b$ for adding $b$ elements to the start of the $k$-th fibre.
This lets us do:
\[ \sum_{\alpha +_k b \of |\Lambda^n_k|} X[\alpha]\times X^{\delta^n_k}[b]\]
So how is the action defined?
Let's solve this puzzle later. At least I have an idea what to do now.
\section{15/7/19}
Nicola Gambino, Christian Sattler, Karol Szumiło and Simon Henry are working on
constructive simplicial homotopy. I assume their constructions are correct,
because they are exactly the ones I was considering. See:
\url{https://arxiv.org/abs/1905.06281},
\url{https://arxiv.org/abs/1907.05394}.
So what is left?
\begin{itemize}
\item There seems to be no consideration of realizability categories.
\item None of the external views I was considering are present, which could mean
my approach is a different way of looking at the same results, or simply
completely different.
\item Completeness of modest sets in the ex/lex completion of assemblies.
\end{itemize}
I have no answer for the coherence properties, but I can do more of the rest.
The main issues are:
\begin{itemize}
\item Do ex/lex completions indeed preserve completeness?
\item Are homotopy categories indeed equivalent and do they preserve
completeness?
\end{itemize}
These are questions about presheaves modulo weak equivalences.
I am unsure about whether the coherence issues pose a problem for the modest
case.
\paragraph{Complete universes}
Rather than looking at maps with a lot of specific properties, we are interested
in morphisms $u\of U \to V$ in locally cartesian closed categories
where every exponential $u^I\of U^I\to V^I$ is a pullback of $u$.
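In diagram form the condition just stated asks for pullback squares; this is only a sketch of the intended shape, with the unlabeled horizontal maps left implicit:
\[\xymatrix{
U^I \ar[r]\ar[d]_{u^I} & U\ar[d]^{u}\\
V^I \ar[r] & V
}\]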
Issues:
\begin{itemize}
\item Beck-Chevalley: what does it mean here? Does it hold up?
I think that asking that $u^I$ is a pullback is sufficient, and that discretes of
the effective topos simply won't do that.
\item Is it true that discretes aren't a complete universe then?
\item What does moving to a category of internal presheaves do?
\item What does localization on a cofibrantly generated model structure do?
\end{itemize}
\paragraph{Beck-Chevalley}
The Beck-Chevalley condition seems to require a little more just to formulate,
since we need $\Pi_f \of u^X \to u^Y$ for all $f\of X\to Y$.
Okay, so I know about the fibred product:
$A^f = \set{\tuplet{y\of Y,X_y \to A}}$
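In the finite case this fibred product is directly computable. A toy sketch, not the construction in the ambient category; the function name and representation (dicts for maps) are hypothetical:

```python
# Finite model of A^f = { (y, X_y -> A) | y in Y } for f : X -> Y:
# pairs of a base point y together with a function from the fibre
# X_y = f^{-1}(y) into A.
from itertools import product

def fibred_power(f, X, Y, A):
    """Enumerate A^f for finite sets; f is a dict X -> Y."""
    result = []
    for y in Y:
        fibre = [x for x in X if f[x] == y]            # X_y = f^{-1}(y)
        for values in product(A, repeat=len(fibre)):   # all maps X_y -> A
            result.append((y, dict(zip(fibre, values))))
    return result
```

For fibres of sizes $2$ and $1$ over a two-point base and $|A|=2$, this yields $2^2+2^1=6$ elements, the fibrewise exponential one would expect.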
This is sufficient. If $u^f$ is a pullback of $u$, then a map
$\Pi_f\of u^X \to u^Y$ is transposed to that one. This also cements completeness
some more. What about Beck-Chevalley?
It would be hard to guarantee that the right adjoints commute on the nose,
but no more than isomorphism is required. It looks like:
if $a$ is a pullback of $b$ with $c\of a\to b$, then $\chi_a\of u^a \to u$ is
isomorphic to $\chi_b\circ u^c \of u^b\to u$. Yeah, the morphisms don't have to
commute on the nose, but they should have isomorphic fibres.
This sounds pretty automatic, however. I.e. in an lccc, if fibred products are
pullbacks, then Beck-Chevalley holds.
I just realized what is nice about the exact completion: it creates a classifier
of discrete objects. The map is unique, so we no longer have to mess with things
holding only up to isomorphism.
There are transposed Beck-Chevalley conditions. The notion of a large coproduct
of small objects is confusing, but in the case of modest sets, it makes sense due
to the modest reflection. Yeah, and we see it in the subobject poset as well.
It doesn't simplify anything to work this way.
\paragraph{Discretes}
So what is the problem with discrete objects in the effective topos?
There is something about quotients along equivalence relations that aren't
$\neg\neg$-closed.
Actually, let's work things out. For modest sets in assemblies our complete
universe is given by the bundle of equivalence classes over the set of partial
equivalence relations on the natural numbers. Since each equivalence class is
a set of numbers, we have a set of realizers for each equivalence class.
This is the universe of modest sets.
Is it complete in the above sense? The fibred products along arbitrary functors
are products again.
\paragraph{differences between ex/lex and ex/reg: the epic factor}
The ex/lex completion is the homotopy category for the canonical cofibrantly
generated model structure on locally contractible simplices. So what changes for
the ex/reg completion? Probably that products over inhabited families of
cofibrant generators suddenly count as well.
The ex/reg completion has extra equivalences between the locally contractible
spaces, with mere existence of paths replacing a specific choice. This weakening
is fatal for completeness in the recursive realizability context, which is good
at throwing a wrench into axiom of choice based arguments.
Locally contractible--for lack of better words. $\geq 1$-connected perhaps.
\paragraph{Internal presheaves}
The ex/lex completion and the simplicial homotopy category combine two steps:
presheaves and cofibrantly generated model structures. So\dots don't presheaves
simply preserve complete universes?
The idea is simple enough: use fibrewise smallness. That should work.
Okay, there is an obvious quirk not to overlook: the homotopy
category allows large sets of paths, but those aren't small objects anymore.
The modest reflection sort of solves this riddle: the classical epic part
is split, the rest is recorded in the small structure.
\paragraph{Combinations}
Ex/lex is simply connected simplicial sets with the standard model structure.
If we repeat the simplicial homotopy category construction up here, what do
we wind up with? Covariantly `bisimplicial' assemblies, with a cofibrantly
generated model structure. The result should be the same if we just look
at simplicial assemblies.
\paragraph{coming back up}
The intuition that modest simplicial sets could be a good model for univalence
is supported by the new papers.
\subsection{Simplicial Assemblies}
\begin{itemize}
\item We will be working with internal presheaves in the category of assemblies.
\item Cofibrations are monics with decidable sets of complementary faces--
`Reedy decidable'.
\item Fibrations are `Reedy split'--have the lifting operators.
\end{itemize}
Nice, but how does this help? What would we prove here?
I guess I want simplicial PERs to be a model. What are they?
I do need a new plan.
To show that simplicial modest sets are a model for univalence,
we should not have to do much, since the modest sets are already models for a
powerful type theory. The generic modest fibration is a neat trick, but should
not be too hard. I am stuck with vague complaints about coherences.
\paragraph{CZF} So Gambino et al. work within CZF. A model exists in
the effective topos and should be close to modest sets.
So the value of the paper is that instead of guessing, I now have a roadmap
toward a model structure on assemblies. I also don't need the separate paper on
constructive simplicial homotopy anymore.
\paragraph{new ideas on density}
Not knowing for sure how useful the papers are yet, let me just write these
considerations down.
Use small morphisms instead of just morphisms with small domains. The problem
of finding factorizations becomes the problem of finding fibrant replacements
in slice categories.
I was only in the process of demonstrating a factorization system. Gambino et
al. do much more than that, but the applicability of their ideas to the
category of assemblies is not immediately clear, ironically because of the
reliance on the internal language, precisely what I was trying to get away from.
This is what the papers can do: either give me something to point to as
evidence for my own claims, or a pointer on how to prove useful results myself.
I am mostly just worried about having nothing left to prove.
I suppose the modest simplicial sets are still interesting, but
we need to bring the arguments into the context of the category of assemblies
and show that the coherence issues don't play such a big role there.
\section{1/7/19}
Important, but I forgot to mention: the notion of small morphism is important,
and small morphisms are closed under composition. Closure under composition
means being small-filtered, giving all small diagrams a cone. This should also
help with composing small left morphisms.
Each morphism has a fully faithful diagram connected to it--its diagram of
fibres--that may sometimes offer a simpler view of other diagrams,
namely as internal functors to such diagrams of fibres. This not only is a good
family of examples, but it could also give an easier statement of the conditions
we need.
\section{30/6/19}
\paragraph{move to toposes}
Conjecture:
Suppose that a $\Pi$-pretopos $\ambient$ with a natural number object has a dense
(directed) diagram $D$. Then $\ambient$ is a topos. The coequalizer of domains
and codomains of epics in $D$ is a subobject classifier $\Omega$. The object of
epics $E$ is crucial here--and directedness is an important reason it exists.
So, this would be a reason to rewrite on the assumption that the ambient
category is an elementary topos. All the examples are in fact toposes.
Let $m\of X\to Y$ be any monic. Needed are $Y_D$ and $Y_E$ plus morphisms
$p,q\of Y_D\to Y_E$, $r\of Y_D\to D$ and $s\of Y_E \to E$ that commute with
the domain and codomain maps and that somehow classify the monic.
Part of the formula is letting $Y_D$ just be a cover of $Y$ with tuplets
$\tuplet{x\of D_0,y\of D[x],z\of D[x]\to Y}$ and $Y_E$ be commutative diagrams
$e\to m$ where $e$ is a $D$-epic, possibly together with $y\of D[\dom(e)]$,
giving a map $Y_E\to X$. Yes, $Y_E$ needs to cover $X$ and all equivalent pairs
of $Y_D$. Feels rather complicated, but something like
$\tuplet{x\of E,y\of D[\dom(x)], z\of D[\cod(x)]\to X}$, with
$p\tuplet{x,y,z} = \tuplet{\dom(x), y, m\circ z\circ (x\cdot)}$ and
$q\tuplet{x,y,z} = \tuplet{\cod(x), x\cdot y, m\circ z}$.
This ought to do the job and induce the desired classification\dots
\paragraph{smarter?}
A better way may be: if the object of $D$-epics $E$ exists, then surely the
object of $D$-monics $M$ exists as well. This should act as a weakly generic
monomorphism, however, which is all we need.
Thinking backwards: we can cover each object with `sums' of $D$-objects. Pulling
back the monics might force us to cover the domain again, but then we still get
a cover for each monomorphism by the generic $D$ morphism.
We need fullness of $D$, but that isn't new is it?
Ultimately, take the epi-mono factorization of the universal $D$ morphism,
then the fibered product of isomorphisms of the fibers of the monic factor,
then the quotient of that equivalence relation. What exactly could we be missing
then?
\paragraph{counter}
I suspect that somehow a subterminal could be so great that no $D$ object
properly covers it, despite density of $D$. Take $\mathrm{Set}/\omega$. Finite colimits
of representables are candidates for $D$; they surely fit every bill. The whole
density thing is easy to arrange. There is no way to cover the terminal object
without using `large' objects as indexes. This is a topos, but maybe we can fix
that using cardinalities as well. E.g. take functions with finite fibres only.
The only worry is that $D$ would not be internal anymore. Okay, so move to the next
limit cardinal: the subcategory of $\mathrm{Set}/\lambda$ with fibres smaller than $\lambda$.
Not a topos, because the subobject classifier is huge. Many more representables;
that sucks. So the exponentials get in the way of a convincing counter here.
\paragraph{Use those exponentials then}
First cover the terminal object with objects of $D$, to sabotage what happened
above. Note that since each object has a unique map to the terminal object, this
simply is the sum of $D$ objects. Call that $C$ and consider $D^C$, or even
$D_0^{D_0}$, just to get something that should be big enough to cover
a potential subobject classifier.
We arrive at $D_0$ modulo equi-inhabitation as before, as the smallest possible
solution. Full idea: for each monic $m\of X\to Y$, $m$ is covered by a family of
$D$-morphisms $\set{ m_i\of X_i\to Y_i| i\of I}$.
Actually, first cover $Y$, then cover the domain of the pullback of $m$ and use
fullness to argue that this must be a $D$-morphism. Does that work?
The unique lifting properties of epics versus monics are crucial here.
With their help, the pullback of $m$ is already classified and $m$ soon follows
by diagram chasing. There is a map to the monic factor of the universal
$D$-morphism.
\paragraph{pressing questions}
Could all $D$-objects be inhabited? Would it matter if they were?
This is another challenge.
No, we have a universal $D$ morphism, whose codomain is a rather large
combination of $D$ objects. The way in which we get a subobject classifier is
not that sensitive to these problems.
\paragraph{Conclusion}
We might as well assume we are working in a topos, because if there is a
suitable dense internal full subcategory, then there probably is a subobject
classifier. All examples I have in mind are toposes. All we have to do is
explain this choice.
\section{23/6/19}
\paragraph{The construction\dots}
Given a dense directed diagram $D$ and family of \emph{generic left morphisms}
$L$ that live in $D$, there is an object $P$ of pushouts of members of $L$.
The following dependent limit tracks finite compositions of these pushouts.
\[\prod_{n\of\nno}\set{f\of[n]\to P|\dom\circ f\circ\delta_0 = \cod\circ f \circ \delta_n}\]
There is a morphism that takes these sequences to their composites, and the
image of this morphism, $A$, will be the object of objects of a new diagram.
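The sequences tracked by the limit can be modelled concretely for finite data. A toy sketch with hypothetical names: $P$ is a set of abstract arrows with $\dom$/$\cod$ assignments, and a sequence $f\of[n]\to P$ is valid when consecutive members compose.

```python
# Chains of composable arrows: f is a list of arrow names, dom/cod are
# dicts assigning endpoint objects.  A chain is valid when
# cod(f(i)) = dom(f(i+1)) for all i; composing a valid chain yields a
# single (domain, codomain) pair, as the boundary condition in the
# displayed limit demands.

def is_composable(f, dom, cod):
    """Check that consecutive members of the chain compose."""
    return all(cod[f[i]] == dom[f[i + 1]] for i in range(len(f) - 1))

def endpoints(f, dom, cod):
    """Domain and codomain of the composite of a nonempty valid chain."""
    assert f and is_composable(f, dom, cod)
    return (dom[f[0]], cod[f[-1]])
```

For instance, with arrows $p\of A\to B$ and $q\of B\to C$ the chain $[p,q]$ is valid and composes to an arrow $A\to C$, while $[q,p]$ is not.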
Morphisms between the composed pushouts are themselves combinations of pushouts
and morphisms:
\[\xymatrix{
S_0 \ar[r]^s\ar[d]_{m_0}\pushout & S_1\ar[d]\ar[dr]^{m_1}\\
T_0 \ar@/_/[rr]_t\ar[r] & \bullet \ar[r]^{\of P} & T_1
}\]
Any commutative square $m\of s\to t$ can be factorized as a commutative
triangle following a pushout square. Here the commutative triangle consists of
morphisms of $P$, however.
The new diagram $L'$ of left morphisms gives a factorisation system which
factors morphisms into left and right ones, because this selection of morphisms
doesn't interfere with the lifting property.
\paragraph{target diagram of generators}
So, small objects. The smallness of the domains of the generic left morphisms
means that $\hom(d,\cdot)$ preserves directed colimits, which is the most
mysterious part. Since we can represent the left factor $l(f)$ of a morphism
$f$ as a directed colimit, there are now a number of steps we can take to lift
a generic left morphism $g$ against the right factor $r(f)$.
Firstly, consider that the morphism
$\dom(g)\to \dom(r(f)) = \cod(l(f)) = \colim_{l'\to f} (l')$
is a member of $\colim_{l'\to f} \hom(\dom(g),\cod(l'))$.
Secondly, there is an operation that takes this data and generates a new member
of the diagram: take the pushout of $g$ and compose with $l'$.
Finally, the finality of $l(f)$ forces $g$ to factor through $r(f)$.
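The first step above condenses to one line: assuming $\dom(g)$ is small in the sense just described, the hom functor carries the directed colimit through,
\[\hom(\dom(g),\colim_{l'\to f}\cod(l'))\;\cong\;\colim_{l'\to f}\hom(\dom(g),\cod(l')).\]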
A number of properties is desired for the target diagram of generators and
obtained only through combining different structures, but what we need is:
\begin{itemize}
\item closure under composition of the left morphisms
\item a dense diagram of codomains, with left morphisms pushed out to all
of them
\item enough morphisms to get a directed diagram
\item not so many morphisms that the left lifting property is lost in the
colimit
\end{itemize}
\paragraph{algebraic lifting properties}
When you equip a right morphism with a lifting operator, no guarantees are
given that lifts of compositions of left morphisms will commute. The
factorizations constructed above do give such guarantees. This is just an aside.
\paragraph{paper}
Perhaps I now finally have what I need to write the paper I have been working
on for the past 5 years\dots
\begin{itemize}
\item a plausible story for algebraic factorisation systems in simplicial
assemblies
\item a plausible story for extending fibrations along horns
\item the language of internal diagrams and sheaves
\end{itemize}
Add the motivating example of modest simplicial sets.
\section{22/6/19}
\paragraph{shortly on factorisation systems}
To replace the small object argument, turn the left side of the factorisation
into a filtered colimit. A saturated set of small morphisms is not needed,
but closure under pushouts and compositions is. Take care with the morphisms of
the category, or the filteredness is going to fail.
\paragraph{what about the algebras?}
Fibrations are algebras already, just not of a monad. The procedure above is
a way to get a free monad, which is the algebraic factorization system. It
doesn't need to consist of such structures itself.
I don't think the algebra idea is that helpful.
\paragraph{better approaches}
A finite composition of functions can be encoded as $X \to Y \to [n]$ with
extra structure to get $X_{i+1}\simeq Y_i$ for all $i<n$. The actual composition
still has to be derived. If we moreover know that $X\to Y$ is a pushout of a
coproduct of generic left morphisms, then we are ready. All structure we need
is present.
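The encoding above can be sketched for plain functions; the name and the list representation are hypothetical. The stages of $X\to Y\to[n]$ are stored as a list indexed by $[n]$, the identifications $X_{i+1}\simeq Y_i$ are implicit in how the stages chain, and the composite is derived rather than stored:

```python
# A finite composition stored as its list of stages; the composite
# itself is derived by folding over the stages, stages[0] first.
from functools import reduce

def derive_composite(stages):
    """Compose stages[0], ..., stages[n-1]; empty list gives identity."""
    return reduce(lambda f, g: (lambda x: g(f(x))), stages, lambda x: x)
```

For example, `derive_composite([lambda x: x + 1, lambda x: 2 * x])` sends $3$ to $2\cdot(3+1)=8$.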
\section{21/6/19}
I'd like to look into the factorization systems again. Ultimately, it should
combine two ideas: approximating objects with dense categories and using
initial algebras or terminal coalgebras, to create factorization systems.
The following concepts are sort of clear:
\begin{itemize}
\item the category of small objects
\item the functors to consider the algebras or coalgebras of
\item what the morphisms between the algebras should look like
\end{itemize}
The rest is a mess.
Proving that a morphism is an acyclic cofibration requires knowing the
complementary faces plus a strategy for glueing them in. This strategy
could be a coalgebra of sorts, breaking down each\dots
Maybe it would look as follows.
\[\xymatrix{
A\ar[r]^i\ar[d]_a & B\ar@/^/[r]^{c}\ar@/_/[r]|(.4){l(b)}\ar[dr]_{b} & LB\ar[d]^{r(b)}\\
X\ar[rr]_f && Y
}\]
Here $c$ is the coalgebra, but $a$ is also the equalizer of $c$ and $l(b)$, to
ensure that the complement of $A$ won't be sent to itself. This only works
because cofibrations are regular monomorphisms. The risk here is that the
coalgebra is not well-founded.
\section{19/6/19}
\paragraph{Two more days of thought}
Let $W = \bigcup_{i\of [n]-\set{x,\xi(y)}} U_i$ and let
$V=\bigcup_{j\of[m]}V_j$.
For faces $f\of W-U_{\xi(y)}-V$, $y$ is a
fine support point. If $f\of W-U_{\xi(y)}-V$, then its subface
$f-y$ doesn't belong to any $V_j$ with $j\neq y$.
For $f\of U_{\xi(y)}-W-V$, $\beta(f)_i\neq \emptyset$ for any
$i\of [n]-\set{x,\xi(y)}$, hence there are points like $\max\beta$ and
$\min\beta$ to use as supporting points.
\begin{align*}
\max\beta(\xi(y)) &=y\\
\max\beta(i) &= \beta(i,\max(\alpha_{\delta(i)})) \textrm{ if } i\neq \xi(y)\\
\min\beta(\xi(y)) &=y\\
\min\beta(i) &= \beta(i,\min(\alpha_{\delta(i)})) \textrm{ if } i\neq \xi(y)
\end{align*}
The remaining faces of $U_{\xi(y)}\cap W-V^y$ can all be extended with extra
points to become members of these difference sets. I didn't want to use $y$ in
these cases, because we don't want the support point to be removable and
$y$ is too specific.
I think we have all faces we need now, but there is still a problem.
If $f\of U_{\xi(y)}-W-V$ there is typically no reason why $f-p(f)(i)$
should belong to $U_{\xi(y)}-W-V$ for $i\of [n]-\set{x,\xi(y)}$. The argument
would probably be that we can get $f-p(f)(i)$ from other supporting points,
but only if those faces have been added before\dots
Obviously, the faces of $W-U_{\xi(y)}-V$ don't need to use this measure. Also,
we have to see whether dimension actually can be left out completely.
What if we just use $\sum(\beta)$?
\paragraph{strategy}
Suppose we always take $\max\beta$.
The supporting points we get $f-p(f)(i)$ from are always strictly lower. So we
use $\max\beta$ to determine when a face can be added.
In fact $\sum_{i\of [n]-\set{x,\xi(y)}} \beta(i,\max(\alpha_{\delta(i)}))$ could
be a replacement for dimension here.
\paragraph{go through proof}
Let $\sigma\tuplet{\alpha,\beta,\gamma} = \sum_{i\of [n]-\set{x,\xi(y)}} \beta(i,\max(\alpha_{\delta(i)}))$.
Let $A_d$ be the union of
$(\bigcup_{i<n} U_{\delta^n_x(i)}) \cap (\bigcup_{j<m} V_{\delta^m_y(j)})$ with
all faces which satisfy:
\begin{enumerate}
\item $\tuplet{\alpha,\beta,\gamma}\of (U_{\xi(y)}-W)\cup(W-U_{\xi(y)}) - V$.
Also $f$ contains its supporting point.
This should be part of the first condition.
\item $d > 2\dim(\beta)+\dim\gamma$
\end{enumerate}
Inaccurate language obviously: the simplex $\tuplet{\alpha,\beta,\gamma}$
generates a simplicial set which is a subset of $A_d$.
We need to show a couple of things, one is that the inclusions $A_d\to A_{d+1}$
are acyclic cofibrations, the other that there is a $D$ such that
$A_D = \bigcup_{i<n} U_{\delta^n_x(i)} $.
So for $(W-U_{\xi(y)}-V)$ we simply take $f\circ \delta_y$, which reduces the
dimensions (when we remove the maximal element of a
segment) and may even land us directly in $A_0$. Hence all of these faces are
present except the one that fails to reach $y$.
For $(U_{\xi(y)}-W-V)$ it gets a bit more complicated. Most $f\circ \delta_y$
are included because of reasons outlined above. The problem cases are
$f-\max(\beta(f))$, which remains excluded, and $f-\max(\beta(f))(i)$.
These never fall into $V_j$ because $\max(\beta(f))(i)$ itself is still there.
If they don't fall into $W$, then we add the new $\max(\beta(f-y))$ to get a
face with the same dimension but a lower score because points in $\Gamma$ are
cheaper. If they fall into $W$, then maybe we just should add $y$ and be done
with it--that would require making $y$ cheaper as well!
That adds 1 to the dimension, but $\sigma$ will have lost more
($0\of[m]$ is a problem, so perhaps we need to adjust our measure of $\sigma$).
\paragraph{cost model}
Define the cost of $\tuplet{\alpha,\beta,\gamma}$ as twice the number of points
$\beta$ reaches in $[m]-y-\xi_x$, plus once the number of points in
$\xi_x\cup\set{y}$, plus the number of points
$\gamma$ reaches. Now we can say that leaving out or replacing a point reduces
the cost, and that there is a maximum cost.
We need to sharpen our definitions.
All $j\of [m]-y$ occur somewhere either way. $f$ reaches $y$, or,
more complicatedly, $f$ reaches all $\xi_i$ except $\xi_{\xi(i)}$, plus an extra
point $p$ which is a combination of maxima.
Finally, after all this time!
\paragraph{consequences}
There are ascend functors $A^n_x:\cat S/\horn_x[n]\to \cat S/\simplex[n]$ and
these preserve acyclic fibrations. They have right adjoints $D^n_x$ which
preserve fibrations, so this is a Quillen adjunction. The unit
$\eta\of\id\to D^n_xA^n_x$ and the counit $\epsilon\of A^n_xD^n_x\to\id$ ought
to be equivalences. This may be another hard-to-prove fact though. In any case
we are interested in the property that $h\ri D^n_x \simeq \id$, because from
there we get the fibrancy of the universe of modest fibrations\dots
We never got around to the fibrant replacements\dots.
The point of this construction was to avoid minimal fibrations,
which are non-constructive. Do I remember why?
Extension instead of `descend', so look for an antonym to replace `ascend':
condensation, reduction, contraction, subtraction, diminution?
\section{17/6/19}
Let's do this again. The set
$\gamma_y = \set{ a\of \alpha_i | \gamma(a,\xi(y))=y }$
can contain multiple elements and we have to be careful that they don't get in
each other's way. Just consider the set of injections into
$\product{i\of[n]-\set{x,\xi(y)}}\xi_i$.
There must be some reason we cannot pick the least element here and just use
that?
Our starting point is:
$(\bigcup_{i<n} U_{\delta^n_x(i)}) \cap (\bigcup_{j<m} V_{\delta^m_y(j)})$
And I think the problem is that these just may have no single point in common.
The intersection $\bigcap_{i<n} U_{\delta^n_x(i)}$ has everything empty
except $\alpha_k$ and $\alpha_{k+1}$. The $V_j$ do the rest, but that seems to
leave us with a set of points to work with.
The question is, is there any point in $U_i$ that cannot be extended by the
least point of $\product{i\of[n]-\set{x,\xi(y)}}\xi_i$?
The jumping around makes sense if we stick to $y$ as starting point, because
there is an opposing face that causes trouble. Also, faces can get in each
other's way, which is a reason to be careful.
The problem is that $U_i\cap V_j$ really do not have any single point in common
with each other, not even the members of the following set:
$\Gamma_y = \set{\gamma\of\product{i<n}\xi_{\delta^n_x(i)}| \gamma(\xi(y)) = y}$
I think $y$ may be a distraction, i.e. we may have to use it to glue a point of
$\Gamma_y$ to a member of $V_j$, but using it to glue in everything just makes
things more complex. I am unsure.
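For a concrete $\xi$ the set $\Gamma_y$ is easy to enumerate, which makes the
"multiple elements" worry tangible. A minimal sketch (all names invented),
representing $\xi\of[m]\to[n]$ as the list $[\xi(0),\dotsc,\xi(m)]$ and a point
of the product as a dict from indices to chosen fibre elements:

```python
from itertools import product

def fibres(xi, n):
    """Fibres xi_i = xi^{-1}(i) of a monotone map xi : [m] -> [n],
    given as the list [xi(0), ..., xi(m)]."""
    return {i: [z for z, v in enumerate(xi) if v == i] for i in range(n + 1)}

def gamma_points(xi, n, x, y):
    """The set Gamma_y: points of prod_{i in [n]-{x}} xi_i whose
    xi(y)-coordinate equals y."""
    fb = fibres(xi, n)
    idx = [i for i in range(n + 1) if i != x]
    choices = [[y] if i == xi[y] else fb[i] for i in idx]
    return [dict(zip(idx, c)) for c in product(*choices)]
```

For instance $\xi = [0,0,1,2,2]$ with $x=1$, $y=4$ gives two points of
$\Gamma_y$, differing only in the fibre over $0$.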
The basic strategy is to glue in faces by dimension. This requires a $p(f)$
for each face $f$ such that the subface $f\circ \delta_{p(f)}$ is the only one
not glued in during the previous stage with all the others. Gluing in every
face that contains a specific point works because that point becomes $p$.
That won't work with our starting point though.
The `downsets' start to make sense: they describe a path to a point we want to
get in first. We have to be specific though. $\gamma$
has to reach every element of $[m]$, otherwise it belongs to some $V_j$.
\paragraph{pushout product}
What we are dealing with is a kind of repeated pushout product:
$\horn_y(\xi_{\xi(y)})\Box\cycle(\xi_i)$.
The map $\xi\of \simplex[m]\to\simplex[n]$ breaks up the horn
$\horn_y[m]\to \simplex[m]$ into a list of cycles
$\cycle[\xi_i]\to\simplex[\xi_i]$ and a horn
$\horn_y[\xi_{\xi(y)}]\to \simplex[\xi_{\xi(y)}]$.
The inflation takes the product of most of these and glues them to the
original simplex. This should result in a simplex that is stable under
pullback.
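For reference, the binary pushout product that all of this iterates: for
cofibrations $f\of A\to B$ and $g\of C\to D$ it is the induced map
\[ f\Box g \of (A\times D)\cup_{A\times C}(B\times C) \to B\times D \]
and in simplicial sets the pushout--product axiom says $f\Box g$ is again a
cofibration, acyclic as soon as one of $f$, $g$ is.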
\paragraph{can we not ignore?}
I wanted to focus on $\Gamma_y$, but it seems that we run into trouble then.
A $\tuplet{\alpha,\beta,\gamma}$ belongs to the complement when
$\beta$ and $\gamma$ both reach all points of $[m]$ (except perhaps $y$, but
then we should be able to add $y$ somewhere). $\gamma$ can reach a lot of points
easily, but $\beta$ can compensate for everything that $\gamma$ misses.
So, let's try it this way. We only glue in faces where $\gamma$ has the
following properties:
\begin{enumerate}
\item $\exists a\of\alpha_k.\gamma(a,\xi(y)) = y$
\item $\forall i\of[n]-\set{x,\xi(y)}. \gamma(\min(\alpha_k),i) = \min(\xi(i))$
\item $\forall a\of\alpha_k-\set{\min(\alpha_k)}. \forall b\of\Gamma_y.
\gamma(a-1)\leq b \leq \gamma(a) \to (\gamma(a-1) = b \vee \gamma(a) = b)$
\end{enumerate}
And of course, $\beta_m\neq \emptyset \vee \gamma_m\neq \emptyset$, otherwise
inclusion is not needed.
What I think we need is:
\begin{itemize}
\item a point $p$ in $\Gamma_y$
\item a `dense' $\gamma$ that intersects that point.
\end{itemize}
What does this do? The point $p$ cannot be removed and if we remove any other
point, fuck!
For most faces, the strategy is to add points to $\gamma$ until the conditions
are satisfied.
Pick a point $p\of\Gamma_y$. We know it does not belong to $V_{p(i)}$ and we need
to be careful about this fact.
I think there should be two conditions to ensure $\gamma$ gets full before
$\beta$ does. For the second condition, $\beta$ and $\gamma$ both ought to be
maximal. If we remove an element from $\beta$, the faces should be included
because of the rule for filling out $\gamma$. Removing an element from $\gamma$
leads to immediate problems, however.
Funny thing is, $\beta$ has to miss an entire range $\xi_i$ of $[n]$, otherwise
none of this will work.
\paragraph{the difficulties}
We need a condition, such that if $\gamma$ satisfies it, then
all $\gamma\circ \delta_j$ satisfy it, except for one.
To add to the trouble, whatever the condition is,
all $\gamma$ that do not reach all of $[m]-y$ have to satisfy it.
The latter seems to make this impossible.
Okay, part of the condition can be that $\gamma$ reaches $y$, but that is not
enough. $\gamma$ can reach $y$ multiple times, and more than once means more
than one $\gamma\circ \delta_j$ that reaches $y$.
We have to discount specific $\gamma$ that hit $y$ multiple times.
A specific point $p$ with $p(\xi(y))=y$ works better, except that removing any
such specific point could land us in $V_{p(i)}$ for $i\neq x$ and $i\neq \xi(y)$.
Can it be made that simple: just ask that $p(i)$ are reached from some other
point as well? Adjacent in $y$ perhaps? i.e. $\gamma(a) = p$ and
$\gamma(a',\xi(y))\neq y$, while $\gamma(a',j) = p(j)$ otherwise for some $a'$?
Looking at the examples, we more or less need the opposite.
I mean, we start with the faces that have full $\Gamma_y$ and gradually but
strategically remove elements to get dense $\beta$ in.
\paragraph{suddenly noticed}
We are asking that $f\circ \delta_k\of \bigcup_j V_j$ for all except one point.
This is a very severe condition actually\dots what the hell does it mean?
The removal of each point results in some $j\of [m]-y$ not being reached
anymore, except for the end of this chain. In other words, every point reached
from the end of this chain is reached elsewhere.
More of a clear out operation then.
Could it be like this: there is a $j\of [m]$ such that $\beta \geq j$ and
$\gamma \leq j$? Nope! $\gamma$ cannot be limited that way.
What about $\max_a(\gamma(a,i))<\min_a(\beta(i,a))$?
On one hand, overlap is not optional.
This has been the problem forever. I cannot get a solid grip on this part of the
problem, no matter what I try.
\paragraph{big picture}
Why is this true intuitively?
It is indeed a matter of using the pushout product multiple times.
It is also this idea of having supplied a support point for these
specific purposes.
If we could show that
$U_{i\neq x}\cap V_{j\neq y} \to U_i$
was an acyclic cofibration, would that not be enough?
A pushout of a coproduct of acyclic cofibrations--
isn't that what we get?
Distribution for $U_i$ would work, but there could be disagreement precisely
on the simplices we need to glue in, as they aren't covered.
But then, if we could show that for $i\neq x$
$\bigcup_{j\neq y} U_i\cap V_j \to U_i$
was an acyclic cofibration, we would be home, right?
We went down this road, and as long as $i\neq \xi(y)$
we can use $y$ as support to glue everything in.
The problem is $\bigcup_{j\neq y} U_{\xi(y)}\cap V_j \to U_{\xi(y)}$.
Still, not having the other around might be a blessing here.
I am unsure.
There is no single edge that belongs to all of $U_{\xi(y)}\cap V_j$,
but there is a system of interconnected edges.
Could we come up with a different cover then, based on $\Gamma_y$?
That would be nice wouldn't it?
But I thought differently.
Let $V'_j = \set{\tuplet{\alpha,\beta,\gamma}| \beta_j= \emptyset}$.
Now see if $V_j \cap U_{\xi(y)} \to V'_j \cap U_{\xi(j)}$ are acyclic
cofibrations. Because $U_{\xi(y)} \subset V_j$ when $\xi(j)=\xi(y)$\dots
we need some help if $\xi(j) = \xi(y)$ means $j=y$.
Also $V_j \cap U_{\xi(y)} \to V'_j \cap U_{\xi(j)}$ aren't necessarily easier at
all.
We know that each $p\of \Gamma_y$ is missing from $V_{p(i)}$.
What about $w_p\of \bigcup_{j\neq p(\xi(j))} V_j\cap U_{\xi(y)}\to U_{\xi(y)}$ as
covering family?
Well, those would be a reason to worry about singletons in $\xi$, since we seem
to miss coverage there.
I think we may just run into the same trouble we got into with splitting $V_j$
before: no reason to believe that we get a pushout of acyclic cofibrations,
because the pushout won't uniquely specify how to extend each face. I guess
this is why I use graded constructions in the first place.
\paragraph{pushout product}
This is a relevant insight. The morphism $\xi\of \simplex[m]\to\simplex[n]$ cuts
the inclusion $h\of\horn_y[m]\to\simplex[m]$ into a sequence of cofibrations,
$h_i\of\cycle(\xi_i)\to \simplex(\xi_i)$ with one acyclic cofibration
$h_{\xi(y)}\of \horn_y(\xi_{\xi(y)})\to\simplex(\xi_{\xi(y)})$. All except $h_x$
are multiplied together and glued back to $h$. Then the whole structure is
pulled back along $\horn_x[n]\to\simplex[n]$.
The product $\prod_{i\neq x} h_i$ is glued at a point that is stable under
the pullback and because it is an acyclic cofibration, the result is one as
well. The rest is more delicate however, because the gluing is non-trivial and
because the pullback definitely damages the other side.
Maybe that is the trick: to index the operation by the points left out.
It is the same set however\dots
Ok, there is no pushout to make this product work.
What if we cover $U_{\xi(y)}$ with maximal chains?
It probably does not get finer than that.
\section{16/6/19}
A general way to show that an inclusion of simplicial sets $f\of A \to B$ is
an acyclic cofibration is to have a subset $C\of B$ of faces, such that for
each $c\of C$ and each $i\leq\dim(c)$ except one, $c\circ \delta^{\dim(c)}_i$
lies in $A$ or in $C$. That way, we can glue in faces dimension by dimension
and use that to prove it is an acyclic cofibration.
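The bookkeeping behind this general way is easy to prototype. A minimal sketch,
assuming simplices are tuples of vertices and ignoring degeneracies (all names
invented): each $c$ must have exactly one facet not yet present--its free
face--and gluing $c$ along that face corresponds to a pushout of a horn
inclusion.

```python
def facets(c):
    """Codimension-1 faces c . delta_i: drop the i-th vertex."""
    return [c[:i] + c[i + 1:] for i in range(len(c))]

def glue_by_dimension(A, C):
    """Try to glue the simplices of C onto A in order of dimension.
    Each c must have exactly one facet not yet present (its free face);
    gluing c then adds both c and that free face.  Returns True if all
    of C can be glued this way."""
    glued = set(A)
    for c in sorted(C, key=len):
        missing = [f for f in facets(c) if f not in glued]
        if len(missing) != 1:
            return False
        glued.add(c)
        glued.add(missing[0])
    return True
```

For example, starting from the horn $\horn_1[2]$ (both edges through vertex
$1$), the $2$-simplex has exactly one missing facet, the edge $(0,2)$, so it
can be glued; remove an edge from the horn and the gluing fails.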
We are looking at this inclusion:
\[(\bigcup_{i<n} U_{\delta^n_x(i)}) \cap (\bigcup_{j<m} V_{\delta^m_y(j)}) \to \bigcup_{i<n} U_{\delta^n_x(i)}\]
The property that we want to use is reaching $y\of[m]$, but the faces of
$U_{\xi(y)}$ cannot do that. That is why we need the extra $\gamma$'s.
So we trace out the difference: faces of $U_{\xi(y)}$ that aren't members of
anything else. Firstly, all $z\of [m]-\set y$ have to be represented somewhere,
but because $\xi_{\xi(y)}$ is excluded, so are all $z \of [m]$ such that
$\xi(z) = \xi(y)$ but $z\neq y$.
I think it is best to snake my way up in here:
$\product{i\of[n-1]}\xi_{\delta^n_x(i)}$. Then at the end, add $y$ to finish
everything off.
$\gamma_y$ can contain multiple components and that is the issue. The potential
maximum is $\prod_{i\neq x,i\neq \xi(y)} \xi_i$ and this we solve using
initial segments.
\section{12/6/19}
\paragraph{A better representation of the inflated simplices}
Let $\xi\of [m]\to[n]$ and let $x\of[n]$. The inflated simplex $S\tuplet{\xi,x}$
for this pair consist of tuples of the following form.
\[\tuplet{
\alpha\of[a]\to[n+1],
\beta\of\product{i\of[n]}(\alpha_{\delta^{n+1}_x(i)} \to \xi_{i}),
\gamma\of\alpha_x \to \product{i\of[n-1]}\xi_{\delta^n_x(i)}
}\]
The simplicial action is now obvious, because the partial functions can simply
be composed. The faces are injective maps, i.e. in
$\tuplet{\alpha,\beta,\gamma}$, $\beta$ and $\gamma$ are injective.
The only missing data is the restriction to $\horn_x[n]$--there must be an
$i\of[n-1]$ such that $\alpha_{\delta^{n+1}_x(\delta^n_x(i))}=\emptyset$. I keep
calling this family $U_i$:
\[U_i = \set{\tuplet{\alpha,\beta,\gamma}\of S\tuplet{\xi,x}|\alpha_i = \emptyset}\]
For the inflated $\horn_y[m]\to\simplex[m]$ we only get tuples that avoid the
value $y$. There are inclusions
$S\tuplet{\xi\circ \delta^m_j,x}\to S\tuplet{\xi,x}$ and we want to describe the
images, which are $V_j$:
\[V_j = \set{\tuplet{\alpha,\beta,\gamma}\of S\tuplet{\xi,x}|\beta_j=\emptyset,\gamma_j=\emptyset }\]
These ominous fibres over $j$ take advantage of the fact that we ultimately end
up in $[m]$.
There is a third family, indexed by $p\of\product{i\of[n-1]}\xi_{\delta^n_x(i)}$.
One demand actually:
\[W_p = \set{\tuplet{\alpha,\beta,\gamma}\of S\tuplet{\xi,x}|
\forall a\of\alpha_x.\beta(x,a)=x,\forall i\of[n-1], a\of\alpha_{\delta^n_x(i)}.\beta(\delta^n_x(i),a)\geq p(i)
}\]
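These membership tests admit a schematic encoding, assuming we only record the
data they actually inspect: the fibres of $\alpha$ and the sets of values
reached by $\beta$ and $\gamma$. This is a simplification of the actual tuples
and every name below is invented:

```python
def in_U(t, i):
    """U_i: the alpha-fibre over i is empty."""
    return not t["alpha"][i]

def in_V(t, j):
    """V_j: neither beta nor gamma reaches the value j of [m]."""
    return j not in t["beta_image"] and j not in t["gamma_image"]

# A toy tuple: alpha-fibre over 0 empty, beta reaching {0, 2},
# gamma reaching {1}.
example = {
    "alpha": {0: [], 1: [0, 1], 2: [2]},
    "beta_image": {0, 2},
    "gamma_image": {1},
}
```

The point of the encoding is that the simplicial action only ever shrinks
these images, so membership in the $U_i$ and $V_j$ is stable under taking
faces.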
See (28/9/17). We have a different representation that may be easier to work
with, but the proof went like this. To prove the following map is an acyclic
cofibration:
\[(\bigcup_{i<n} U_{\delta^n_x(i)}) \cap (\bigcup_{j<m} V_{\delta^m_y(j)}) \to \bigcup_{i<n} U_{\delta^n_x(i)}\]
Let $A_k$ consist of all simplices of dimension less than $k$,
$s=\tuplet{\alpha,\beta,\gamma}\of\bigcup_{i<n} U_{\delta^n_x(i)}$ that satisfy
one of:
\begin{itemize}
\item $s\of \bigcup_{j<m} V_{\delta^m_y(j)}$--i.e. it is in from the start
\item $\beta$ reaches $y$.
\item $\gamma$ satisfies a special initial or final segment condition.
This condition requires more explanation.
\end{itemize}
\section{14/1/19}
It is difficult to solve the problems for arbitrary categories of generic cofibrations, but for mere families things are more or less clear now. I mean, I was struggling to understand the morphisms given an arbitrary base category, but I never solved that problem and I don't need to.
\begin{enumerate}
\item Start with a family of \emph{generic cofibrations} and a family of small objects and create a new category of pushouts of generic morphisms with small domains.
\item Create a new diagram whose objects are witnessed by zigzag chains, while its morphisms are limited to specific forms.
\item The density comonad for the latter category is the factorisation system.
\end{enumerate}
Now I think the zigzag chains don't actually help as witnesses. The codomains don't have to be small. So we need something like small objects joined with a finite number of codomains.
Here is the problem: I have a construction I don't know how to internalize and an internalized structure that doesn't fit into my proof.
\paragraph{What works?}
There is a finite composition $X_0\to\dotsm\to X_n$ starting with a small $X_0$, a sequence of generic cofibrations $a_0,\dotsc,a_n$ and morphisms $f_i\of \dom(a_i)\to X_i$ such that the morphism $X_i\to X_{i+1}$ is the pushout of $a_i$ along $f_i$.
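The single step this composition iterates--pushing out a generic cofibration
along an attaching map--is concrete in the motivating ambient category of
sets. A toy sketch (all names hypothetical) of the pushout of $f\of A\to B$
along $g\of A\to X$, as the quotient of the disjoint union $B+X$ by
$f(a)\sim g(a)$:

```python
def pushout(A, B, X, f, g):
    """Pushout of f : A -> B along g : A -> X in Set: the disjoint union
    B + X with f(a) glued to g(a) for every a in A.  Elements are tagged
    ('B', b) / ('X', x) to keep the union disjoint; returns the set of
    equivalence classes as frozensets."""
    parent = {("B", b): ("B", b) for b in B}
    parent.update({("X", x): ("X", x) for x in X})

    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for a in A:                       # impose f(a) ~ g(a)
        parent[find(("B", f[a]))] = find(("X", g[a]))

    classes = {}
    for v in parent:
        classes.setdefault(find(v), set()).add(v)
    return {frozenset(c) for c in classes.values()}
```

Gluing both endpoints of an interval to a single point, for instance, yields a
one-element quotient, the set-level shadow of attaching a cell.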
This recursive type may not exist in the general sort of category we are looking in. That is the problem. So I am looking for a work around with the family of small objects.
In the motivating example this is not even a problem. The codomains are representable--as in 'could hardly be more finite.' This is the simplest case: there is a dense full subcategory (full diagram?) which contains the domains and codomains of the generic cofibrations.
Witnessing of cofibrance consists of breaking up a morphism into components and showing each of them is a pushout. I believe we don't need retracts or transfinite compositions at this stage.
So, we end up with a diagram of cofibrations and certain simple morphisms between them, which has a dense diagram of domains and is directed. Now the density comonad provides morphisms which factor as cofibrations followed by fibrations.
Interesting note: the 'full' diagrams and the 'families' are the final and initial object in the category of diagrams with the same objects, which is why we need no distinction.
To finish the paper, we focus on the case where there is a family of generic cofibrations, whose domain and codomains belong to a given dense diagram. Everything else should then kick in to get the desired results.
\section{17/12/18}
What are the morphisms in the category of transfinite zig-zag chains?
\[\xymatrix{
C \ar[r]^a \ar[d]_c & D \ar[r]^b\ar[d]_d & \nno\ar[d]^s\\
C \ar[r]_a \ar[ur] & D \ar[r]_b & \nno
}\]
For starts we could simply consider the straightforward morphisms that preserve all the structure.
The problem with transfinite composition of cofibrations is that the codomains of the generic cofibrations don't ordinarily have to be small. We run out of small objects too soon, which is a problem, because it means we cannot keep the category of generic cofibrations small by keeping the domains small.
The infinitary chains should be easier to work with than a collection of finite chains of all lengths, including giving a simpler definition of morphism. On the other hand, we now need more morphisms to keep the factorization clean. Still, I'd say the simplest proposal is good enough. Ignore the zigzags and allow reindexing. Given $\tuplet{a,b,c,d}$ as above and a similar tuple $\tuplet{a',b',c',d'}$, a morphism $\tuplet{a,b,c,d}\to\tuplet{a',b',c',d'}$ is a morphism $f\of a \to a'$ with a zero-preserving increasing morphism $g\of \nno\to\nno$ that commutes with $b$:
\[
\xymatrix{
C\ar[r]^a\ar@{.>}[d]_{f_0} & D \ar[r]^b\ar@{.>}[d]^{f_1} & \nno\ar@{.>}[d]^g\\
C'\ar[r]_{a'} & D'\ar[r]_{b'} & \nno
}\]
So, the zero-preserving increasing morphism emphasizes that we care about the result of the infinite composition, but the individual steps aren't important. The zigzag is evidence for the left lifting property and it is unique when cofibrations are monic, but this evidence is ignored by the morphism, as it doesn't matter for the result.
The reason transfinite compositions can be composed is that we are working inside an exact ambient category $\ambient$. I.e. $D_\infty$ is a quotient of $D$, by an inductively defined equivalence relation based on the chain of morphisms $D_i\to D_{i+1}$. This defines a new diagram of transfinite compositions in the ambient category.
\paragraph{Finite zigzags}
At this point it may be interesting to consider 'terminating compositions'. So from some $i$ on, $a_{i+1} = a_i$. Bringing back the finiteness this way, the smallness starts to pay off more clearly.
Now that we know how to do it in the infinite case, we can start considering zigzags over initial segments of $[n]$ and introduce the same kinds of morphism. This time, everything is nice and finite, however.
Doubt is creeping in again.
It looks like my notion of morphism picks a member of the target to embed into.
Yeah, we are actually ignoring the transfinite composition itself. We need to treat the transfinite compositions as diagrams on their own.
Take the finite zigzag and reverse the order. The result is a diagram of simplicial objects coming from another diagram of cofibrations. Once we pay attention to the actual compositions, however, the zigzag can be safely ignored. They are just evidence that the whole composition taken together is a cofibration.
\paragraph{Recap}
Given a diagram of generic cofibrations and a dense diagram, we first combine this into a diagram of generic extensions of small objects and then we take transfinite compositions as above. We (may) then still need to add identities of small objects as cofibrations. The result is a diagram whose density comonad is supposed to be a factorisation system. How would that work?
The problem is that after factoring $f$ as $r(f)\circ l(f)$, $r(f)$ should have the right lifting property. Because any generic cofibration $c$ has a small codomain, any morphism $c\to r(f)$ should factor through some finite--or at least small--approximation of the factorisation. The pushout is simply another such finite approximation.
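The smallness step in that argument has a direct set-level shadow: a map out of
a finite object into an increasing union lands in some stage of the chain. A
toy sketch (names invented):

```python
def factors_through(image, chain):
    """Least stage k of an increasing chain of sets that contains the
    (finite) image of a map, or None if no stage does."""
    for k, stage in enumerate(chain):
        if image <= stage:        # subset test: the map factors through stage k
            return k
    return None
```

This is the reason a morphism $c\to r(f)$ with small (here: finite) domain
factors through a finite approximation of the factorisation, so the lift can
be built stage by stage.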
The zigzags are now in the way aren't they? It is actually harder to work with these structures now.
\paragraph{Relevance}
In the saturated diagram a morphism tells us something about the lifting property. The morphisms point out potential relations between fillers for the cofibrations. At the same time, some morphisms are needed to get a directed category of cofibrations to work with. This is the trouble now. If we don't start with a family, but with a diagram of generic cofibrations, we may break connections that this initial diagram has. Adding morphisms too carelessly can result in a loss of directedness, however.
Everything is still at stake.
\paragraph{Insight}
We allowed pushouts and compositions as morphisms. Then added morphisms from the base diagram to intervene. With the compositions intervening, it is still okay to let gaps fall between the composites, because it follows the same principles.
The challenge is directedness. This is automatic for sums, but parallel pairs must have coequalizers too. That is the hard part. Morphisms have to be limited to eliminate parallel pairs that have no coequalizer. However, the generic diagram may contain morphisms that complicate matters.
I should write down what I know and perhaps what I need. If I cannot tell whether my construction can handle arbitrary diagrams, just do the discrete version. If I cannot extend fibrations constructively, just do it classically. Leave it to others to work out the full idea.
\paragraph{Generic parallels}
I don't really know what to do with these anyway. Meanwhile, the idea of a finite composition of pushouts of generic cofibrations still works well.
\section{16/11/18}
I have no better idea than the zigzag chains for completing the diagram.
Recap: any diagram in the category of arrows can be combined with a diagram of small objects, so the density comonad becomes a factorization of morphisms.
The zigzags solve the problem that all domains are small, but codomains don't have to be. Meanwhile, they don't solve the problem of missing identities and they introduce the problem of how to define morphisms between them.
\paragraph{Compositions first}
I have been considering this, but it looks like it cannot work out. Suppose we have a sequence of generic cofibrations $a_i$ whose pushouts compose to form a new cofibration. Vital data about this composition is part of the pushouts, especially about what parts of the domain of $a_i$ are covered by the domains of $a_j$ for $j<i$ and which are part of the domain of the full composition.
Perhaps this can be mitigated with the transfinite composition diagrams.
I.e. we have the morphism:
\[\sum_{i<\omega} a_i \of \sum_{i<\omega} dom(a_i)\to \sum_{i<\omega} cod(a_i)\]