To replace the notion of heat with the more redolent notion of electromagnetic "field" disturbance, I sketch out a few notions to be worked on.
The charge is a unit of repulsion and attraction in a static equilibrium set-up involving matter. Thus charge is measured by a weight or force between certain standard materials, with "gravity" discounted as much as possible by measuring orthogonally to the general gravitational direction.
However, weight is usually associated with gravitational attraction, an association to be counteracted by the use of the term force.
Again, "mass" is usually used to quantify force, and this is usually determined by weighing. Since gravity is to be discounted, a system which uses a lever to counteract an assumed constant gravitational attraction is to be employed; "mass", such as it is, is then a ratio of relative densities when measured this way.
We now recognise and introduce the notion of charged matter, that is matter which is influenced by a force between matter called charge.
Having discounted gravity, we may measure this force by the relative quantities of charged matter, and the first type of measurement we can make is the relative density of charged matter in a given electrostatic equilibrium situation. This is exactly analogous to the measurement of the ratio of masses, and it will later be used to obviate the distinction between mass and charged mass.
For the time being, in a constant electrostatic field a charged quantity of matter experiences a measure called charge, which is a force. Now Newton distinguished clearly between the measure and the combinatorial method of calculation. He was equally clear in distinguishing the qualitative nature or attribute of a form from the quantity that measured the effect of that nature. I will be equally clear.
The notion of charged matter is unclear because it mixes the cause with the quantity that measures the effects of such a relationship between matter, so charge has to give way to the clearer notion (at least it is clear in today's usage! Not so clear in Newton's day, as terms vied for common acceptance), that is, the force of attraction and the force of repulsion.
Now if I replace charge by force, then the matter involved in the force must be accounted for. To account for the matter I need a theory of matter, and the particle theory is the one commonly used. I am going to modify that and use a regional theory of matter based on fractally related bounded regions in space. I can then differentiate a condensing region of space from a rarefying region of space by the dynamics of the pattern boundaries.
So now I can define the quantity of space in 2 ways: the quantity of condensing space is that measure formed by conjuncting volume with the density of regional condensing fractal patterns (a count per unit volume); and the quantity of expanding space is that measure that arises from conjuncting volume with the density of regional expanding fractal patterns.
The measure called the force of attraction or repulsion is defined as proportional to the conjunction of the respective quantities of space and the acceleration of that region of space under the electrostatic constant situation.
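As a rough illustration, the two measures just defined can be sketched in code. Every name, constant and number here is my own illustrative assumption, not an established quantity; the sketch just makes the conjunctions explicit:

```python
# Illustrative sketch of the measures defined above. All names, the
# proportionality constant and the pattern counts are assumptions.

def quantity_of_space(volume, pattern_density):
    """Quantity of (condensing or expanding) space: volume conjuncted
    with the regional fractal pattern density (a count per unit volume)."""
    return volume * pattern_density

def force_measure(q_space, acceleration, k=1.0):
    """Force of attraction/repulsion: proportional (constant k assumed)
    to the quantity of space conjuncted with its acceleration."""
    return k * q_space * acceleration

# Example: a region of 2.0 volume units holding 5 condensing patterns
# per unit volume, accelerating at 3.0 units in a constant field.
q = quantity_of_space(2.0, 5.0)   # 10.0
f = force_measure(q, 3.0)         # 30.0
```

The point of the sketch is only that both measures are conjunctions (products) of simpler measures, as stated above.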
Once that force is identified and acknowledged, I can develop the notion of a "field" as the volume in space over which such a force is experienced; this immediately relates field density, as a measure, to regional condensing or expanding fractal density.
Having defined forces of attraction and repulsion, I may now describe gravity in terms of some combination of these two fields, as relates strictly to measures. I have 2 measures of the quantity of space, but the same volume of space can hold and combine the 2 types of regional fractal patterns. In doing so we do not increase the density of each pattern, and if the patterns pair exactly, the density count can be made the same by counting pairs, not individual regions.
Should the density of regions of space increase when volumes are combined in this way?
Well, now we have to move into the realm of the gas laws as Boyle and others examined them. In the gravitational field the combination of gases increased the relative "mass", and this was commonly thought to prove there were more particles per unit volume, thus an increase in density. However, this was not the case, as some gases required pressure to attain the required volume, while some did not but became hot through a reaction. It turned out that those which reacted could have variable volume effects, some suggesting on the face of it a lower density. Thus the combination of these gases revealed that a simple count of particles is not a sufficient explanation for the effects. One thing did remain the same, and that was the relative densities; thus the relative densities could be conjoined once standardised, even though the particle count could vary. This was called the conservation of matter, but it is more accurately the conservation of relative densities.
Thus the 2 densities of regional space, when combined in the identical volume, will balance out the two densities in their own volumes put on the other side of the scales.
Summarising this using the quantity of space measures: the quantity of space a plus the quantity of space b is the same as the quantity of space a+b. However, a+b cannot always be seen as a multiple of a or b; sometimes it is a new compound.
So gravity can be described in terms of this more chemical relation between quantities of space and some resultant acceleration.
Boyle's law in its most general form links pressure, volume and temperature, a new quantity of measure. Now pressure is usually distinguished from force, because the measure is derived in a different context, in which variable volume is evident and needed to be accounted for. Well, they are different measures of the same cause, if you like. The measures do not address the cause, but we can redefine the repulsive and attractive forces as repulsive and attractive pressures by accounting for the surface areas or cross sections of the matter in the field of force. Doing this enables me to see that the pressure of the field is related to the gas laws, and more particularly to a quantity which is volume and temperature conjuncted. This quantity is the quantity of thermo-electromagnetic motion or, as I have framed it, the interaction dynamics of the expanding and condensing regional fractal densities.
From a kinematic viewpoint, volume conjuncted with temperature is a measure of the amount or quantity of kinetic motion in a volume. Now the quantity of motion is the conjunction of velocity and the quantity of space with that velocity. Newton's third law implies a wave distribution of that velocity within a bounded region, but that is Newton's measure. We now routinely use Leibniz and Huygens' measure, which is conserved in dynamic situations where Newton's is not: the quantity of space conjoined with the square of the velocity.
Thus this quantity of motion of Leibniz was found to be conserved and kinematically proportional to the volume conjuncted with the temperature. Temperature is thereby some measure of kinetic motion density.
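The difference between the measures can be checked with a standard 1-D elastic collision (the masses and velocities are arbitrary illustrative values). The signed momentum sum and the Leibniz/Huygens squared-velocity sum both survive the collision; the speed-only sum m|v|, the historical Cartesian measure, does not:

```python
# 1-D perfectly elastic collision: which "quantity of motion" is conserved?

def elastic_collision(m1, v1, m2, v2):
    """Post-collision velocities from momentum + energy conservation."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1, m2, v2 = 1.0, 2.0, 3.0, 0.0
u1, u2 = elastic_collision(m1, v1, m2, v2)   # -1.0, 1.0

signed   = lambda m, v: m * v          # signed (vector) momentum
unsigned = lambda m, v: m * abs(v)     # speed-only "quantity of motion"
leibniz  = lambda m, v: m * v * v      # Leibniz/Huygens vis viva, m*v^2

print(signed(m1, v1) + signed(m2, v2), signed(m1, u1) + signed(m2, u2))
# 2.0 2.0   -> conserved
print(unsigned(m1, v1) + unsigned(m2, v2), unsigned(m1, u1) + unsigned(m2, u2))
# 2.0 4.0   -> NOT conserved
print(leibniz(m1, v1) + leibniz(m2, v2), leibniz(m1, u1) + leibniz(m2, u2))
# 4.0 4.0   -> conserved
```

This is only the textbook collision result, used here to make the conservation claim about the squared-velocity measure concrete.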
Thus the pressure acts in a volume and impinges on surfaces; this is proportional to a quantity of space in a volume moving acceleratively, that is kinetically, and that is proportional to a volume with a uniform temperature, that is a uniform kinetic motion density.
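For the standard version of this proportionality, kinetic theory of an ideal monatomic gas gives pressure as exactly two-thirds of the translational kinetic energy per unit volume. A minimal sketch, with illustrative numbers (roughly atmospheric conditions):

```python
# Sketch under standard ideal-gas assumptions: P = (2/3) * KE density.

k_B = 1.380649e-23   # Boltzmann constant, J/K

def pressure(n_per_volume, temperature):
    """Ideal gas law in density form: P = n k_B T."""
    return n_per_volume * k_B * temperature

def kinetic_energy_density(n_per_volume, temperature):
    """Mean translational KE per unit volume: (3/2) n k_B T."""
    return 1.5 * n_per_volume * k_B * temperature

n = 2.5e25   # particles per cubic metre (illustrative)
T = 300.0    # kelvin
P = pressure(n, T)                     # ~1.0e5 Pa
assert abs(P - (2.0 / 3.0) * kinetic_energy_density(n, T)) < 1e-6
```

That identity is the conventional expression of "pressure is proportional to kinetic motion density"; how far the regional-fractal picture reproduces it is left open here.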
These are all the result of the interplay of the regional fractal condensing and expanding densities, which combine sometimes mechanically, sometimes in chemical reactions which develop temperature and volume variations, as well as repulsive and attractive force variations. These attributes are proportional enough to describe most observable behaviour at all scales.
Let's tackle the question about the temperature of the sun. The coronal atmosphere has a higher temperature than the core of the sun. This has a simple explanation in terms of the kinetic motion density: it should translate to a higher kinetic motion density measurement in the corona than in the sun's core. One simple contributory factor is the greater volume at the coronal atmosphere for velocities to develop and stabilise under accelerative pressure sources.
Equally, one would expect the core's potential motion density to far exceed the coronal potential motion density.
Now I have not defined potential and kinetic motion as yet but they are related to the Huygens measure of the quantity of motion which is a conserved quantity.
Now this leads to a few combinatorial comments. Conserved quantities must be additively conserved, and factorisable but not separable. Thus the volume of a form is a conserved quantity, which means magnitude is a conserved attribute. Density is not a conserved quantity, as we have seen: density is a count of fractal regions per unit volume, and that depends on my operational definition of a fractal region. Thus I could combine 2 regions and count them as one, but I might miss some and get an approximate answer.
However, we conjuncted the two quantities and found that using a third-party pressure on the two volumes of a gas, or on two equal volumes of different gases, gave us a conservation of a measure called the quantity of space, even when the two gases were combined in the same volume. This experiment would increase the density per unit volume in one of the volumes, thus leading to the conservation result. However, in some cases the density per unit volume remained the same, and yet the quantity of space measure did not vary; what had varied was what came to be known as the atomic mass, which always involved a chemical reaction. Later, nuclear reactions demonstrated that the elements themselves could also experience nuclear compositional changes.
It is easier to point this out now due to the weight of empirical evidence, but to suspect that density is not a conserved measure requires some pretty tautological thinking, and a willingness to count particles. Avogadro is the most notable philosopher for suspecting this kind of elemental behaviour, but one wonders if transmutability was a desired goal for Alchemists, to the extent that one would accept this as a distinct part of the belief system about divine powers.
When we think of J. J. Thomson's plum pudding analogy of the atom, however, we see we do not have to attribute even a clear notion of what is going on to any of these pioneers.
Now Newton had established a method of compounding velocities, which, using the distinctions of the time, meant that "signs" had to be taken into account, that is, relative directions of motion according to philosophers of the time, including Descartes. Few acknowledged the Indian source of this notion, because very few liked the notion at all! It was very largely hated. Nevertheless, a consistent set of tautological sign rules was adopted from the far eastern influences, even though they were entirely misconstrued.
Now it took Grassmann and Hamilton to establish that Algebra was not a collection of methods and tricks but a fundamental operating system for arithmetic(s), embedded in which was the notion of sign. Although this is not the case, it serves to act as a way of advancing Algebra to the position of a serious academic subject, one which someone would pay good money to see thriving in an academic institution! Bearing that in mind, then, there is no natural or sacrosanct reason to maintain every aspect of any algebra as being of a fundamental nature without an empirical basis for such a policy. There is no empirical basis for "sign".
Nevertheless, it is with sign, and in many, many ways despite sign, that "vector" algebras were developed. Two aspects are required here: vector magnitudes are conserved; vector combinations are not.
The magnitude of vector combinations is of interest, as it can vary with the relative vector angle, giving a way of recording certain magnitude combinations.
So now we may see that density is analogous to a vector. The magnitude of the density vectors is conserved, but the magnitude of the combined vector sum is not, and varies with the "supposed" relative angle.
Thus the quantity of space is a vector multiple; the quantity of space has its magnitudes conserved, and the resultant density vector for the compound of the densities is adjusted to a supposed relative angle to make the density vector correct, thus reflecting whether I count two regions fully or partially.
So if 2 regions effectively combine into one I adjust the density vector sum to reflect this. If 2 regions apparently annihilate each other, I adjust the density vector to reflect this.
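The angle-dependence being leaned on here is just the law of cosines for vector addition: each vector's own magnitude is fixed, but the magnitude of the resultant runs from full reinforcement down to annihilation as the supposed relative angle varies. A minimal check (magnitudes 3 and 3 are arbitrary):

```python
# |a + b| for two vectors of fixed magnitude at relative angle theta.
import math

def resultant_magnitude(a, b, theta):
    """Law of cosines form: sqrt(a^2 + b^2 + 2ab cos(theta))."""
    return math.sqrt(a * a + b * b + 2 * a * b * math.cos(theta))

a, b = 3.0, 3.0
print(resultant_magnitude(a, b, 0.0))          # 6.0  -> regions fully reinforce
print(resultant_magnitude(a, b, math.pi / 2))  # ~4.24 -> counted partially
print(resultant_magnitude(a, b, math.pi))      # 0.0  -> regions annihilate
```

Choosing the angle to make the resultant come out right is exactly the "adjustment" described above: the individual magnitudes never change, only the supposed angle between them.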
But, you may ask, isn't all this supposed to happen automatically by the mathematical manipulation process? This is where the wool is pulled over one's eyes.
Newton observed that a system of ropes and pulleys and weights adopted certain recognisable geometrical forms. This happened so consistently under the influence of gravity that he "posited", that is proposed, using Euclidean theorems to describe a method of compounding. This for the most part has proved very satisfactory, but nothing just drops out automatically! The empirical data has to be modelled by a Euclidean-derived form. All the adjustments have to be made prior to using the form. Then the form enables a Euclidean set of relations to be applied to specific questions or issues. Without that adjustment the Euclidean relations and processes do not apply.
Similarly, once we have made adjustments of our model to fit the real situation, we then have to check whether the allowable algebraic processes tell us anything we can't already see in the empirical data, and under what constraints that insight will apply.
Thus, if I fix the density vector for a compound, does that density vector hold true in all experimental conditions, and what might a product rule do to such a density vector? For example, would a cross product on two density vectors preserve them, or just their magnitude? If the product preserves some relation between density vectors, what implications does that have empirically? Can density vectors be recovered from a cross-producted pair, or do new ones get recovered, and what empirical implication does that have?
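The cross-product question has a definite answer in ordinary vector algebra: the product preserves a relation between the pair (its magnitude is |a||b| sin θ) but the pair itself cannot be recovered, since many different pairs share the same cross product. A small check with arbitrary illustrative vectors:

```python
# Two different vector pairs with the same cross product: the pair
# is not recoverable from the product alone.
import math

def cross(a, b):
    """Standard 3-D cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def mag(v):
    return math.sqrt(sum(c * c for c in v))

a, b = (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)
c, d = (2.0, 0.0, 0.0), (0.0, 1.0, 0.0)   # a different pair...
print(cross(a, b))   # (0.0, 0.0, 2.0)
print(cross(c, d))   # (0.0, 0.0, 2.0)  -> same product
```

So, empirically, any model built on cross-producted density vectors would retain only an oriented-magnitude relation between the regions, not the regions' individual vectors.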
At each stage, the allowable rules have to be tested empirically to see if they have a utility. If they do, that does not mean anything more than that the algebra has a utility. However, adjusting the empirical data to fit that algebra to get that utility may be more work than just calculating through on your own. Certainly Minkowski thought that putting equations into 4-vector form was just too much like hard work!
The fact is Schrödinger did it and found a very useful combination of symbols, but it takes years to learn how to understand that symbol combination! Is it worth it? Well, for some it is financially, and others just nerd out over it, but for most of us, definitely not!
These combinatorial sequences and arrangements are just not accessible in the way they were derived. That is not to say they are not accessible! We play with bricks and Lego and other relational toys which regularly allow us to do the same combinatorial processes, but few would recognise them as such. I mean, come on, how many of us can do the Rubik's cube? And yet the same combinatorial moves and sequences are involved in that game.
So the quantity of space at the core of the sun is, on the face of it, very great, based on a single gas model, with immense regional fractal densities of, in general, both sorts. This means the repulsive and attractive pressures will be huge. Setting the density vectors to support each other means this pressure will be immense and explosive. However, if I set the density vectors to annihilate each other, this means that the core will be "massive" but stable, because the quantity of space vectors are zero, even though the magnitude is not. The quantity of space vectors have their effect in the repulsive and attractive force measures. Thus with this setting the forces would be zero. This means the net quantity of motion would be zero and the Huygens quantity of motion would be zero.
This is clearly not the case. So if I set the density vector to combine the gas regions into one, I could still combinatorially end up with 4 regions in dynamic motion. Thus the quantity of space is the same, but the quantity of motion has just increased under Huygens' measure, where under Newton's it may not show much significant difference. However, in this situation the attractive and repulsive forces are acting to establish an equilibrium. That is, there are quantities of space of both types to define the forces, but the forces may act to establish an equilibrium. In this case, where the Newton quantity of motion is zero but the Huygens is not, I would define a potential motion density measure.
Thus in this case a huge potential motion exists at the core, entirely dependent on the density vectors acting to promote distinct regions, but balancing the forces out between those regions. Even so, this is not a viable dynamically stable state due to the inverse square law, so huge pressure rings are needed to maintain close proximity. Once that condition weakens, the regional motion will become less stable, leading to evaporation and condensation waves destabilising the outer core and above. This wave expansion and contraction in this region will result in increasingly greater velocity differentials and a noticeable increase in both measures of the quantity of motion. In this case I would define a kinetic motion density measure.
Now, of course, this is a simple model, sketched and simplified beyond that. But even in this state a dynamic stratification is conceivable. Both densities are related to the force vectors, which are related to the fractal regional density vectors, and so are vectors by definition. The further away from the core, the greater the spherical volume, and the less action the core has on these volumes of space. So while condensing and expanding are occurring in these volumes, the condensing space is differentiating toward the core and the expanding space is differentiating away from the core, leaving greater volume for regions to be accelerated by the repulsive and attractive forces. Thus we have an increasing kinetic energy density as one moves from the core, less hindering frictions and collisions, and generally a more spread-out distribution of the types of quantity of space.
Now the force field effects show up in terms of relative velocities. Condensing regions are attracted straight to the core and vectorially spiral in that direction, where they meet greater and greater viscosity. Expanding regions are also strongly attracted to the core, but their expansiveness counteracts this attraction, typically taking much longer to spiral into the core. In the meantime, their velocities enable them to also evaporate away from the core as a body, attracted to nearer condensing types of space, the stronger the better.
At some stratification, the interaction between the 2 types of quantities of space is optimal, and chemical compounds are formed. Again, the regional fractal density vector configuration plays a key role in all of this and contributes principally to this stratification.
The connection between pressure in a volume and the kinetic motion density in a volume is clear, if tentative. And the link with Boyle's generalised gas laws allows the connection to be made with temperature and kinetic motion density.
This is a simplified and tentative proof of concept to explain the sun's coronal temperature difference. It should be clear that electromagnetism has been roughly included in the fractal regional definitions, but the whole model needs more detailed work.
In passing, the model gives a description of the Van Allen radiation belts. The electrostatic and magnetic field lines are what need to be identified first, before moving to a dynamic model of the earth's magnetic field and electromagnetic phenomena.
One immediate question is how the fractal regional density vectors are set. This would appear to be as a result of various electromagnetic and electrostatic dipole configurations of the fractal regions. In particular, the electromagnetic configuration is empirically known to add an attribute called electromagnetic mass, a concept abandoned in haste with Einstein's theory in the ascendancy, but which appears to have returned in the guise of the Higgs boson!