count{^{N}S_{p}} = N!/(N - p)!

Where N is assigned by a prior sequence process called "count{}".

Where count{} is a sequencing process that assigns a label stored within a standard alphanumeric sequence.

Where a standard alphanumeric sequence stores names, labels, and/or marks in a formal order or "flow", which in addition is formally applied anticlockwise when assigned to a non-straight or curved spatial arrangement.

Where a spatial arrangement is an experience of space with all distinctions potential, not actual, and in which such distinctions are assigned by iterative processes of comparison, through which, in imposed sequences, the distinctions straight, curved, and clockwise motion arise.

The sequence to which count{} is applied is created as count{} is applied; before count{} is applied it is just a spatial arrangement, which I define as a Scatter arrangement. Thus a Scatter arrangement is a spatial arrangement experience that is potential, not actual, and actualised in the process called count{}.

I may count beyond my store of names by giving an algorithm for name or mark construction. The algorithm is the essential notion of ad infinitum as far as process is concerned.
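One minimal sketch of such a name-construction algorithm, assuming a store of just 26 letter names that is extended ad infinitum in the style of spreadsheet column labels (the store and the extension rule are my illustrative choices, not the author's):

```python
from string import ascii_lowercase

def make_name(i: int) -> str:
    """Construct the i-th name (i >= 1) from a finite store of 26 letters,
    extending without limit: a, b, ..., z, aa, ab, ..., az, ba, ..."""
    name = ""
    while i > 0:
        i, r = divmod(i - 1, 26)      # bijective base-26 digit extraction
        name = ascii_lowercase[r] + name
    return name

# count{} may now proceed past the stored names indefinitely.
print([make_name(i) for i in (1, 2, 26, 27, 28, 703)])
# → ['a', 'b', 'z', 'aa', 'ab', 'aaa']
```

The algorithm never terminates of its own accord; as the text notes, it is p that supplies the finite cut-off.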

S is a mnemonic to remind that the process is about sequence construction.

p is a count{} result less than or equal to N, which specifies the finiteness of the construction process, and is a mnemonic for the points or nodes in the sequence.

Thus, though an ad infinitum algorithm may be given, it is p that controls or sets the start and end of any finite sequence, in conjunction with the formal role of the standard alphanumeric sequence.

Now using this I have been able to explore sequence construction. N is the count{} result of things to be sequenced. They are subjectively objectified in some manner, whether by association with a mark, a sound, a sensation, or a flavour. This count{} result is used in many other sub-processes, so almost immediately it applies to several contexts: it is the maximal number of nodes, points, or stations in the flow of the sequence; it is the minimum structural provision that must be made to display all the sequences in one place; and it is the maximum amount of choice, the maximum count{} result of the degrees of freedom one has while constructing a sequence.

In contrast, p is the constraint on the count{} result of nodes constructed in a sequence. It is a design and a build limit, and an important test parameter to decide when the process is complete.

The two combine to form a constraint on the objects when it comes to choices during the construction stage.

I define N - P + 1 as the measure of choice at each point P during the construction process. P takes on the values 1, 2, 3, …, p during construction, and the choice measure decreases as we go. Thus the freedom of choice decreases during construction.

It is important to realise that the construction process is a sequence, and that it can be directed by another sequence. Thus the sequence p = 1, 2, 3, 4, …, N - 1 completely controls the set-up of the process, while P = 1, 2, 3, …, p controls the construction through the measure of choice.
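The decreasing measure of choice can be checked numerically: multiplying N - P + 1 over the construction steps P = 1, …, p reproduces the count N!/(N - p)! from the opening formula. A small sketch, with N = 5 and p = 3 as illustrative values:

```python
from math import factorial

def choice_measures(N: int, p: int) -> list[int]:
    """The measure of choice N - P + 1 at each construction step P = 1..p."""
    return [N - P + 1 for P in range(1, p + 1)]

N, p = 5, 3
measures = choice_measures(N, p)   # [5, 4, 3]: freedom decreases as we go
product = 1
for m in measures:
    product *= m

# The product of the shrinking choice measures is the sequence count:
assert product == factorial(N) // factorial(N - p)
print(measures, product)           # → [5, 4, 3] 60
```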

How I disposition the resultant sequences is a choice separate from this process as it stands.

So how I include this disposition is important for clarity. The notation as it stands links procedures to a factorial calculation process which gives a count as a result. The count is actually of the displayed variety of sequences constructed satisfying the parametric constraints. I therefore have to give a process G{} that generates these varieties, and then a clear process description of how to do it.

Now to do that rigorously I will have to be explicit about choice, as choice is a rather vague term procedurally. I will replace it by the notion Option, in which the choices are explicitly laid out as options. This means I use the sequence formed or generated in the count{} process, thus G{count{Scatter}}, which is meant to represent a generation of varieties of sequences in and by the process of count{}. count{} returns a value of N, while G{} returns a list (sequence) of sequences formed in counting to N. Remember count{} applies a standard sequence to a Scatter arrangement, so a standard sequence is the begetter of all sequences by count{}.

G{} may return a single sequence or a list of sequences when count{} is applied. We may now use that list as an options list, and each sequence as an options sequence.
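A sketch of G{} producing such an options list, using Python's itertools.permutations as a stand-in for the generation process (the labels and the values N = 4, p = 2 are illustrative assumptions, not part of the original notation):

```python
from itertools import permutations
from math import factorial

# Stand-in for count{Scatter}: labelling a scatter of N = 4 objects.
labels = ["a", "b", "c", "d"]
N, p = len(labels), 2

# Stand-in for G{S}: every p-node sequence satisfying the constraints.
options_list = list(permutations(labels, p))

# The displayed variety matches the opening formula N!/(N - p)!:
assert len(options_list) == factorial(N) // factorial(N - p)
print(len(options_list), options_list[:3])
# → 12 [('a', 'b'), ('a', 'c'), ('a', 'd')]
```

Each tuple in the list is one options sequence; the list as a whole is the options list.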

But count{} is not the only procedure for generating a sequence. For example, the Mandelbrot recursive or iterative equation z_{n+1} = z_{n}^{2} + c_{0} is in fact a procedure for generating a sequence of markers, given initial markers z_{0}, c_{0}. It requires, at another level, a procedure for denoting pairs of count{} results, and for performing the factorisation of the pairs by each other to produce a "combined" pair. It also requires a process to combine pairs as adjugates. This is all usually hidden away behind the hand-waving term "number field", but as you can see it is procedurally complex.

Thus G{z_{n+1} = z_{n}^{2} + c_{0}} generates a substantial list of sequences, governed by the internal parameters and by the externally imposed G{count{Scatter}} sequence which determines n.
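A minimal sketch of this generator, assuming Python's built-in complex pairs as the "number field" whose procedural machinery the text mentions; the choice of starting markers and the small grid of c_{0} values are illustrative only:

```python
def quadratic_sequence(z0: complex, c0: complex, n: int) -> list[complex]:
    """Generate the first n markers of z_{k+1} = z_k^2 + c0, starting at z0."""
    seq = [z0]
    for _ in range(n - 1):
        seq.append(seq[-1] ** 2 + c0)
    return seq

# An externally imposed count sequence determines n for each run; varying
# c0 over a small list yields a list of sequences, in the manner of G{}.
sequences = [quadratic_sequence(0j, c0, 5)
             for c0 in (0.1 + 0.1j, -1 + 0j, 0.25 + 0j)]
for s in sequences:
    print(s)
```

For c0 = -1 the markers settle into the 2-cycle 0, -1, 0, -1, …, one small instance of the variety of behaviours the internal parameters govern.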

As you may see, procedures become quite complex very quickly, and the count of the sequence procedure shows that this complexity increases factorially, not arithmetically or geometrically. I need to take some time to get used to the overwhelming complexity these option increases generate, but I already know that the fractal paradigm, with its emphasis on iterative processes, is the best way to get to grips with it.

How does G relate to S? S only specifies the constraints on a sequence, where G does not; G relies on constraints being implicit within its arguments. Thus we should write G{S} to generate all the sequences, and count{G{S}} = N!/(N - p)!

How does this relate to other sequence generators? It is of course a convolution, in which a subjective stance has to be taken as to where to begin to apply the formalisms, and what to include and exclude. Pragmatically, with this level of complexity, one cannot but be selective, and in some cases make use of analogy. One cannot know everything, but one can build on sound principles and with utilitarian goals, and with a fundamental acceptance of the fractal nature of it all. Thus what may be a scary complex detail at one level may be a simple rule of procedure at another level. In that sense the Newtonian Method of Fluxions is going to be very handy in dealing with levels of complexity!

For example, no matter how complex the human Genome, the binomial coefficients will always be involved in sequencing, and that is amazing, that binomial coefficients appear at all scales. That is fractals for you. Also, no matter what sequence is generated, it can always be used as an option sequence in some more complex process, or a simpler one.
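The connection between sequence counts and binomial coefficients can be made concrete: the permutation count N!/(N - p)! always factors through the binomial coefficient C(N, p), since choosing which p objects appear and then ordering them gives N!/(N - p)! = C(N, p) · p!. A quick numerical check of that identity:

```python
from math import comb, factorial, perm

# Every count of p-node sequences from N objects factors through a
# binomial coefficient: N!/(N - p)! = C(N, p) * p!
for N in range(1, 8):
    for p in range(N + 1):
        assert perm(N, p) == comb(N, p) * factorial(p)

print("N!/(N - p)! = C(N, p) * p! holds for all N < 8")
```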

Most of us will be introduced to sequences through "number". We will be intrigued by number sequence patterns, but not relate so much to dance sequence patterns, or film action sequences, or other experiential sequences. We may then be introduced to series, which look like huge sums, but the elements of the sums are in a relationship to each other such that we could write them down as a sequence before we added them. So a series is like all the terms in a sequence added together, particularly if we know a process to generate the sequence before adding the terms.

Why are they called terms? Because the word derives from terminology, the way we notate distinctions.

It is easy to get caught up in the fascination of terms and formulae and sums and sequences and numbers and miss the point. Numbers obscure what is actually happening. The sequences we are studying are sequences of quantities of magnitude, to use a Newtonian concept. The ancient Greeks, particularly Euclid, developed a special type of quantity of magnitude called an arithmos. This was a combination of unit quantities of magnitude called monads. These arithmoi were thus a standardised combinatorial form, a net that approximated the form, or quantity of form, they represented or covered. The important thing was that the form was always identifiable, as a horse or a wine vessel or a pot, but the arithmoi net meant the form could now be counted and associated with a standard count. Thus a sequence of arithmoi was a sequence of countable nets of recognisable forms, not mere "numbers".

The significance of this is that a sequence represented not the number relationship, but the form relationship. Thus if you take a growth of a plant, the sequence of growth is represented by forms at each stage. If one represents those forms by their arithmoi one develops a relationship between countable forms. Studying this countable relationship may reveal additional patterns and insights about the growth of the plant.

Or take an example in proteomics, say the RNA polymerase molecule copying a DNA instruction in a gene, producing a messenger molecule. As that messenger RNA molecule grows, a sequence of arithmoi nets can represent the forms. But the nets will take on the folded shapes of the form, and so one could end up with a sequence of platonic forms as the messenger RNA is sequenced. This of course is still countable, but it is also indicative of the applicability of the Platonic theory of forms. Arithmoi, as quantities of magnitude as form, are therefore more versatile and more expressive or insightful than mere number.

We must always recall that the Pythagoreans were said to have used many types of "number", but in fact they used many types of countable nets called arithmoi.

My generalisation of these structural and countable relationships I called the compass multivector. This was and is an attempt to link the subjective processing centre to every form it processes, as well as to provide a relativistic other reference frame for subjectively objective entities. These compass multivectors are also sequenced in natural growth and development scenarios.

I have only begun to scratch the surface of the powerful combinatorial procedures we now use to mix up space to generate new sequences, to combine sequence terms in series, and to iterate these sequence-generating procedures to transform form and generate fractal relationships.