Planning a chemical synthesis requires thinking about the chemical formula of the product and choosing reactants which provide the necessary building blocks by one or more of the basic forms of reaction. Stoichiometry allows us to express the reaction in quantitative form. Le Chatelier's Principle is used to qualitatively predict the effect of a change in concentration, pressure or temperature on the equilibrium state (ultimate degree of completion) of a reaction. Equilibrium constants, electromotive potentials and Gibbs free energy data are used to make more quantitative predictions as to the completeness of a reaction.
Basic Forms of Reactions. Combination reactions (A + B → AB) are most often used to unite elements to make binary compounds (those with just two elements), especially oxides, hydrides, sulfides, nitrides, phosphides and halides. This tends to be most practical when the elements can be cheaply obtained.
Combination reactions are also used to convert oxides to carbonates (by adding carbon dioxide), nitrates (by adding nitrogen oxide), and sulfates (by adding sulfur oxide), or to hydrate a compound (add water to it).
The simplest and most important decomposition reaction (AB → A + B) is electrolysis, in which a compound made of several ions is dissociated into its component ions. The various combination reactions can also be reversed.
Double displacement reactions (AB + CD → AD + CB) occur between ionic compounds, but are only useful if the reaction is driven forward by the "disappearance" of one of the products; see Le Chatelier's Principle, below.
A redox reaction is one in which one atom or group gains electrons (reduction) and another loses electrons (oxidation). There are many inorganic compounds which comprise a positively charged metal ion. If the metal ion is reduced to the point that it is electrically neutral, then you have obtained the elemental metal. This is one of the goals of metallurgy.
If any of the reactants or products in a combination, decomposition, or single replacement (AB + C → AC + B, or → CB + A) reaction is an element, then the reaction is a redox reaction. A double replacement reaction is a redox reaction if any of the atoms changes its oxidation state (e.g., iron from +2 to +3).
Tables of reduction potentials can be used to predict whether a particular redox reaction will occur spontaneously, or needs to be driven by an applied voltage (see "Electrochemistry").
The most important single replacement reactions are those in which one of the reactants is a free metal or a halogen molecule. The more reactive metal displaces the less reactive one (e.g., copper + silver nitrate → copper nitrate + silver), the more reactive halogen displaces the less reactive one (e.g., bromine + potassium iodide → potassium bromide + iodine). The goal may be to make the new salt, to reduce the less reactive metal to elemental form, or both.
We can determine which metal or halogen is more reactive by inspecting a table of reduction potentials; the list of metals, from most active to least, is called the electromotive series.
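As a rough illustration of how such a table is used (the handful of standard reduction potentials below are the usual rounded textbook values), a free metal displaces a dissolved one whose reduction potential is higher:

```python
# Standard reduction potentials in volts (rounded textbook values).
E0 = {"Zn": -0.76, "Fe": -0.44, "Cu": +0.34, "Ag": +0.80}

def displaces(free_metal, dissolved_metal):
    """A free metal displaces a dissolved metal ion if the free metal has the
    lower (more negative) reduction potential, i.e., is more easily oxidized."""
    return E0[free_metal] < E0[dissolved_metal]

print(displaces("Cu", "Ag"))  # True: copper pushes silver out of silver nitrate
print(displaces("Ag", "Cu"))  # False: silver will not displace copper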
Stoichiometry. Knowing the chemical formulae of the reactants and products, we can "balance" the equation of a chemical reaction, e.g., know that "x" molecules of compound 1 (#1) react with "y" molecules of #2 to make "m" molecules of #3 and "n" molecules of #4. And that in turn means we don't have to guess how much of compound #1 to add in order to fully react it with #2. And likewise we can calculate the theoretical yield of #3 and #4, given the amounts of #1 and #2 provided.
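As a minimal sketch of that arithmetic (the reaction, 2 H2 + O2 → 2 H2O, and the quantities are chosen purely for illustration):

```python
# Theoretical yield from a balanced equation, using 2 H2 + O2 -> 2 H2O
# as an illustrative example. Molar masses are approximate (g/mol).
MOLAR_MASS = {"H2": 2.016, "O2": 32.00, "H2O": 18.015}

def theoretical_yield(mass_h2_g, mass_o2_g):
    """Return grams of water obtainable, limited by the scarcer reactant."""
    mol_h2 = mass_h2_g / MOLAR_MASS["H2"]
    mol_o2 = mass_o2_g / MOLAR_MASS["O2"]
    # Stoichiometric coefficients: 2 mol H2 and 1 mol O2 give 2 mol H2O.
    mol_h2o = min(mol_h2 / 2, mol_o2 / 1) * 2
    return mol_h2o * MOLAR_MASS["H2O"]

print(theoretical_yield(10.0, 100.0))  # H2 is limiting: ~89 g of water
```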
Le Chatelier's Principle. If a chemical system is in equilibrium, and a variable (pressure, temperature, concentration of reactant or product) is changed, the equilibrium shifts to resist the change. This has a number of interesting implications:
1) if the chemical reaction is chosen so that one of the products is
– insoluble, and thus precipitated out of the solution,
– a gas, and so escapes the solution, then the reaction will be driven forward as the system shifts to try to replace the "lost" products.
2) In a reaction of ionic compounds, if one of the products (ion combinations) is a compound which is itself a poor electrolyte (a compound which only minimally dissociates into ions, such as water), then its component ions are "depleted" which drives the reaction forward.
3) the chemist can shift the equilibrium of the reaction forward (toward the products)
– by adding one of the reactants in excess.
– if any of the reactants or products are gases (e.g., hydrogen, oxygen, carbon dioxide, ammonia), and there are more molecules of gas on one side of the reaction than the other, by a suitable change in pressure, which shifts the equilibrium in one direction or the other (see Atmosphere Control, below).
– by a suitable change in temperature (see Temperature Control, below).
– by "coupling" it to a second reaction, one whose starting material is a product of the first reaction, so the second reaction helps pull the first one forward.
Chemical Equilibrium. Many chemical reactions are reversible, that is, they can proceed in either the forward or reverse direction. If the forward and reverse reaction rates are equal, an equilibrium can occur, in which the reaction is incomplete, but there is no further propensity toward change in the concentrations of the reactants and the products. The equilibrium relationship can be expressed quantitatively as a concentration-dependent ratio which equals an equilibrium constant. (The equilibrium constant is also dependent on temperature and sometimes also on pressure.) Once the equilibrium constant is determined for one set of concentrations of the particular reactants and products, the equilibrium formula can be used to calculate the changes in the concentration of the product if the concentrations of the reactants are changed.
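A minimal numerical sketch of that idea, for a generic reaction A + B ⇌ C with Kc = [C]/([A][B]); the value of Kc and the starting concentrations below are illustrative assumptions:

```python
# Equilibrium of a generic A + B <-> C with Kc = [C]/([A][B]).
# Kc and the starting concentrations (mol/L) are illustrative assumptions.
def equilibrium_extent(kc, a0, b0, c0=0.0):
    """Find the reaction extent x such that Kc = (c0+x)/((a0-x)(b0-x)),
    by bisection between no reaction and complete consumption."""
    lo, hi = 0.0, min(a0, b0) - 1e-12
    for _ in range(100):
        x = (lo + hi) / 2
        q = (c0 + x) / ((a0 - x) * (b0 - x))  # reaction quotient at extent x
        if q < kc:
            lo = x  # reaction has not gone far enough forward yet
        else:
            hi = x
    return x

x = equilibrium_extent(kc=50.0, a0=1.0, b0=1.0)
print(f"[A] = [B] = {1.0 - x:.3f} M, [C] = {x:.3f} M")
```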
Thermodynamics/Gibbs Free Energy. There are reference books in Grantville (e.g., the CRC Handbook of Chemistry and Physics) which have tables of thermodynamic values for various elements, cations, anions and solids. You can use these tables to predict whether a reaction involving those entities can occur spontaneously.
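A sketch of how such tabulated values get used: ΔG° = ΔH° - TΔS°, a negative ΔG° means the reaction can proceed spontaneously, and ΔG° = -RT ln K ties it to the equilibrium constant. The enthalpy and entropy numbers below are illustrative stand-ins, not values copied from the CRC tables:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def gibbs_and_k(delta_h_kj, delta_s_j, temp_k):
    """Standard free energy change (kJ/mol) and the equilibrium constant it
    implies via delta_G = -RT ln K. Inputs here are illustrative, not CRC data."""
    delta_g_kj = delta_h_kj - temp_k * delta_s_j / 1000.0
    k = math.exp(-delta_g_kj * 1000.0 / (R * temp_k))
    return delta_g_kj, k

dg, k = gibbs_and_k(delta_h_kj=-92.0, delta_s_j=-199.0, temp_k=298.0)
print(f"delta_G = {dg:.1f} kJ/mol, K = {k:.2e}")  # negative delta_G: spontaneous as written
```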
Rate. Loosely speaking, the equilibrium is the endpoint of a chemical reaction, and rate is how quickly it gets there. For a reaction to be commercially feasible, it must not only have an equilibrium favoring the products, it must have a high enough reaction rate. Unfortunately, the prediction of reaction rate is difficult and at the very least requires a knowledge of the exact reaction mechanism. Reaction rates increase with concentration (more chance for the reactants to collide) and temperature. Reactions of ions in solution tend to be fast. Other reactions are slower, as some (but not all) of the bonds holding the reactants together will need to be broken.
Planning. In general, synthetic strategies depend on either displacing one metal with another which is higher in the electromotive series, or on causing two soluble salts to react to form an insoluble product, a gas, or water. (See appendix table 1-2.)
Electrochemistry
Electrochemistry studies the use of spontaneous chemical reactions to create an electric current (as in a battery) or the use of an applied electrical voltage to force a chemical reaction to occur (as in an electrolytic cell).
If the electromotive potential of a reaction is less than zero, then the reaction won't occur spontaneously. But you can still make it happen by applying electricity. The voltage has to be high enough to counteract the negative potential of the reaction, and the current determines how much product is produced. The reaction will not be 100% efficient, so you will have to use more current than is theoretically required.
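The current-to-product arithmetic is Faraday's law; here is a minimal sketch, with the current efficiency treated as an assumed parameter rather than a measured one:

```python
FARADAY = 96485.0  # coulombs per mole of electrons

def electrolysis_yield_g(current_a, hours, n_electrons, molar_mass_g, efficiency=0.9):
    """Mass of product deposited or liberated for a given current and time.
    n_electrons is the electrons transferred per formula unit; efficiency is
    an assumed current efficiency (real cells fall short of 100%)."""
    charge = current_a * hours * 3600.0  # coulombs
    moles = efficiency * charge / (n_electrons * FARADAY)
    return moles * molar_mass_g

# Illustrative: copper (Cu2+ + 2e- -> Cu, 63.5 g/mol) at 100 A for 10 hours.
print(f"{electrolysis_yield_g(100, 10, 2, 63.5):.0f} g of copper")
```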
An electrolytic cell has an electrolyte and two electrodes (cathode and anode). The electrolyte may be a solution or a molten salt; the key point is that it contains mobile ions. An ion is an atom or molecule which has lost one or more electrons giving it a positive charge (cation), or gained one or more electrons, yielding a negative charge (anion). The voltage drives the movement of cations toward the cathode, where they are reduced, and of anions toward the anode, where they are oxidized.
At the anode and cathode, the products may undergo further reaction to form secondary products. In a two compartment diaphragm or membrane cell, some kind of barrier prevents undesired reactions between anode and cathode species. For example, in the chloralkali process, hydroxide ions are allowed to react with sodium ions in the cathode compartment (making caustic soda), but not with chloride ions in the anode compartment. And recombination of sodium and chloride ions is also inhibited.
In 1633, Dr. Phil built a "wet cell" battery with a dilute sulfuric acid electrolyte and a zinc electrode. Offord, "Dr. Phil Zinkens a Bundle" (Grantville Gazette 7). That story doesn't reveal the identity of the second electrode, but it would probably be copper; see Boatright, "So You Want to Do Telecommunications in 1633?" (Grantville Gazette 2).
Here, we are more concerned with electrolysis, which is the decomposition of a chemical by electricity. Dr. Gribbleflotz experimented with electrolysis of an unspecified salt in Offord and Boatright, "The Dr. Gribbleflotz Chronicles, Part 2: Dr. Phil's Amazing Essence Of Fire Tablets" (Grantville Gazette 7).
In the old time line, water was decomposed into hydrogen and oxygen in 1800; sodium and potassium were isolated by electrolysis of their salts in 1807.
The first electrochemical reaction of industrial importance was probably in the purification of platinum. In 1991, the principal electrochemical products were caustic soda, chlorine, aluminum, copper, zinc, chromium, sodium chlorate, caustic potash, magnesium, sodium, manganese dioxide, permanganates, manganese, perchlorates, and titanium. (KirkOthmer9:125). The most common electrolyte was probably sodium chloride.
Electricity is supplied by power plants as high voltage alternating current, but for electrochemical use, this needs to be rectified into direct current and stepped down by transformers to a lower voltage.
Catalysts
What appears to be a single reaction may occur through a series of steps (addition, elimination, substitution and rearrangement), each with its own molecularity (the number of reacting molecules) and own rate law (a mathematical relationship between the rate of the reaction step and the concentration of the reactants). The slowest step determines the rate of the overall reaction.
Catalysts increase (or decrease, so-called negative catalysts) the rate of a chemical reaction without participating in the net reaction. They have no effect on the equilibrium concentrations of the reactants and products.
Johann Dobereiner discovered that the rate of the conversion of alcohol to acetic acid (1816) or acetic aldehyde (1832) could be increased by conducting the reaction in the presence of platinum wire. He created (1823) a lighter in which the hydrogen flame was produced by the action of sulfuric acid on zinc, in the vicinity of a platinum sponge (EA "Dobereiner"; Jentoft). In 1817, Humphry Davy studied the effect of wires of different metals on the rate of reaction of coal-gas with oxygen. The term "catalysis" was coined by Jöns Jakob Berzelius, who used it to explain additional phenomena, including the rapid decomposition of hydrogen peroxide by metals.
EA "Catalyst" says that "many common catalysts are powders of metals or of metallic compounds," and by way of example mentions that platinum catalyzes the hydrogenation of double bonds. It also indicates that acids can be catalysts; "sulfuric acid catalyzes the isomerization of hydrocarbons."
EA "Platinum" says that for use as a catalyst, platinum is used in powdery ("platinum black", from reduction of platinum chloride) or spongy form, and there is reference to its use in production of nitric acid.
Further "data mining" EA will identify other catalysts, which I have tried to logically group below: metals: palladium, neodymium, samarium, rhenium, lutetium, ruthenium, molybdenum, silver, mercury, nickel, iron, rhodium, a platinum-rhodium alloy (for preparation of hydrocyanic acid from ammonia, methane and air, or preparation of nitric acid or ammonium nitrate), copper, unidentified transition metals, metal oxides: iron oxide (to catalyze the direct combination of nitrogen and hydrogen in the Haber Process, EA "Ammonia"), manganese dioxide (to speed the thermal decomposition of potassium chlorate to produce oxygen, EA "Chemical Reactions"), platinum dioxide (from fusion of chloroplatinic acid with sodium nitrate), copper oxides, chromium zinc oxide (used in methanol production), scandium oxide, cadmium oxide, lead oxide (litharge), acids: hydrobromic acid, chromic acid, hydrogen fluoride, hydrochloric acid (for nitrobenzene), miscellaneous: copper acetate, aluminum chloride, certain organotin compounds, nickel-aluminum sulfide, sodium nitrate (for manufacture of sulfuric acid), sodium ethylate, peroxides, hot alcoholic solution of potassium cyanide, lithium acetate, n-butyllithium, coordination compounds of zirconium, phosphorus pentaflouride, water (!).
EA apparently overlooks the organometallic catalysts, which were rather important in the late twentieth century.
It is important to note that many catalysts are reaction-specific. Hence, there is going to be a lot of educated trial and error: systematically testing the effect of each of a series of potential catalysts to see if any of them facilitate a reaction of interest.
A good example of this is the screening carried out by Bosch to make the Haber nitrogen fixation process feasible commercially. Haber initially identified osmium and uranium, both of which were quite expensive, as effective catalysts. Bosch set up test reactors, and tested 4,000 different catalysts over five years, finding that an impure iron oxide catalyst was cheap and operable. (McGrayne 66; KirkOthmer5:323).
Just to complicate matters further, modern catalysts aren't necessarily simple materials. Because the catalytic material is expensive, it is usually advantageous to use it in small amounts, and disperse it on a support material with a high surface area. Gamma-alumina is the most popular support. (KirkOthmer 5:347).
There are also catalytic promoters. These are substances which don't act as catalysts themselves, but which potentiate the activity of the "real" catalyst. There are both chemical promoters which change the surface chemistry, and textural promoters which alter the physical characteristics. Alkali metals have been used as chemical promoters.
Catalysts can be deactivated as a result of fouling (they are physically masked by deposited material), poisoning (feed impurities which reduce their catalytic activity), and physical change (e.g., sintering). Catalysts may in turn be regenerated.
The modern catalyst for ammonia synthesis is a combination of iron oxide as the catalyst, aluminum and calcium oxide as textural promoters, and potassium as a chemical promoter.
Some catalysts-common acids, finely divided metals (e.g. platinum), and some metal oxides-can be put to work in the 1632verse in fairly short order. Others are rare materials, or of a complex composition or structure, and it will take years, if not decades, of work to duplicate them.
Temperature Control
Temperature affects both the rate and the completeness of a reaction. A typical rule of thumb is that for every 10°C increase in temperature, the reaction rate will double. The effect of the temperature on the completeness of a reaction depends on whether it is endothermic (needs heat) or exothermic (releases heat). Higher temperatures favor endothermic reactions and hinder exothermic ones.
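In numbers, that rule of thumb looks like this (it is only the rough approximation quoted above, not an exact law):

```python
def rate_factor(delta_t_c, doubling_interval_c=10.0):
    """Approximate multiplier on reaction rate for a temperature change,
    using the rough 'doubles every 10 degrees C' rule of thumb."""
    return 2.0 ** (delta_t_c / doubling_interval_c)

print(rate_factor(30))   # ~8x faster for a 30 degree rise
print(rate_factor(-20))  # ~0.25x, i.e. four times slower when chilled 20 degrees
```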
There are other considerations. Too high a temperature can result in side reactions, including decomposition. So, depending on the reaction, you may want to heat things up, keep the temperature from increasing above a certain point, or bring it below room temperature.
If a reaction is temperature sensitive, then you need a good thermometer. For industrial work, you might prefer a thermostat which controls a heating or cooling device. In 1634, the Essen Instrument Company is manufacturing precision mercury thermometers. (Mackey, "Ounces of Prevention," Grantville Gazette 5). I would expect that simple spirit thermometers are being made, too.
Both heating and cooling processes are slower to start, and stop, when the reaction is on an industrial scale. As the volume increases, the ratio of the heating or cooling surface to the volume decreases.
In the laboratory, if an elevated temperature is needed for a reaction, the chemist will use a gas-burning Bunsen burner. This can reach a temperature close to 900°C. Up-time, natural gas is used, but Dr. Phil has an alcohol burner in 1633. (Offord and Boatright, "Dr. Phil's Amazing Essence of Fire Tablets," Grantville Gazette 7.)
On the industrial scale, you may be burning some kind of fuel, which heats air or water surrounding the vessel, or passing through tubes in the vessel. Steam distillation falls in this category. Or you may be converting electrical energy into heat energy. Or running two industrial processes alongside each other, one providing heat for the other.
Chemical reactions tend to be more efficient when the reactants are all in the liquid phase. Solids react only at their surfaces, and gases are low in density. If one of the reactants is solid at room temperature, then to put it in the liquid phase, it must be dissolved or melted. And melting requires heat.
In some cases, it is possible to drastically lower the melting point of the substance of interest by adding a second substance, known as a "flux". Sodium, potassium and lead oxides lower the melting point of glass from 1700°C to perhaps 900-1200°C. Aluminum oxide melts at 2054°C, but it can be dissolved in cryolite, which is molten at a little less than 1000°C.
You may also be trying to lower the melting point of the waste material. For example, in smelting copper, you may want to make sure that the silica forms a very liquid slag that the copper can sink through, so iron oxide is added.
Smelting metals typically requires a reducing agent (e.g. carbon) and heat. For tin or lead oxide, a campfire (600-650°C) is good enough, but copper requires a temperature of 700-800°C and forgeable iron, 1100°C.
Combustion processes cannot exceed the "adiabatic combustion temperature," which, for combustion in air, is about 2000°C for natural gas, 2150°C for oil and 2200°C for coal. The fuel is the source of carbon and the air is the source of oxygen. The limiting temperature is a function of the heating value of the fuel, the specific heat capacity of the fuel and the air (and the combustion products), the ratio of fuel to air, and the air and fuel inlet temperatures (Wikipedia, "Combustion"). Even higher temperatures are achievable with rocket engine fuels/oxidizers.
The practical combustion temperatures for industrial chemistry are much lower than the theoretical limit. Complete combustion is difficult to achieve if there is insufficient air; heat is lost by radiation and carried away by exhaust gases; and so forth. To ensure complete combustion, it is customary to use an excess of air, but the dilution then reduces the combustion temperature.
In 1920, a coal furnace could achieve a temperature of 1600°C without a blast, and 1800°C with one. A gas-fired furnace, with hot air, both the gas and air under pressure, could reach about 2000°C. (Marsh, 46). For higher temperatures, you need to heat by means other than combustion.
An electric arc furnace uses an electric current to heat a conductive material. That could be an ionic compound, or a conductive metal. Perhaps the first industrial use of the electric arc furnace was in the production of calcium carbide by heating lime and coke to 2000°C (1888). Electric arc furnaces came to play an important role in small-scale steelmaking.
Another option for sidestepping the practical combustion temperature limit is to use a solar furnace. Temperatures of 3000°C have been achieved by focusing solar radiation.
The higher the temperature our technology will generate, the more options we have for chemical synthesis.
To chill things down, you can put the vessel in ice, an alcohol bath, dry ice (solid carbon dioxide), or liquid nitrogen. (For availability of CO2 and nitrogen, see part 2, and Huston, "Refrigeration and the 1632 World," Grantville Gazette 17.)
Atmosphere Control
Some reactions cannot be conducted in air, because the air itself would react. If so, the air is replaced with an inert gas, like nitrogen or argon.
Or you may need an atmosphere whose pressure is higher or lower than normal. It is important to compare the number of gas molecules at the beginning and end of the reaction. If that number decreases (as in ammonia synthesis), increasing the pressure will cause the reaction to shift (per Le Chatelier's Principle) in favor of reducing the pressure, which means in favor of fewer gas molecules, and thus in the forward direction. On the other hand, if the number of gas molecules is increased by the forward reaction, then you want to conduct the reaction under lower-than-normal pressure.
To change the pressure, you need two things: a pump, and a vessel with walls strong enough to withstand the pressures generated.
Vacuums may be needed to pull out a gaseous product (to drive a chemical reaction), or to lower the boiling points of the compounds in an organic residue (as in vacuum distillation). Vacuum pumps have been scavenged from refrigerators. (Gorg Huff, "Other People's Money," Grantville Gazette 3)
Elevated pressure also may be used to keep the reactants in the liquid phase, or to facilitate a gas phase reaction. In the mid-nineteenth century, autoclaves were built which could achieve pressures of 725-1150 psi (14.7 psi is normal atmospheric pressure). A 1901 ammonia synthesis used a 1450 psi autoclave. In the early twentieth century, large-scale continuous feed reactors had been built which could handle 2000-5000 psi. By the 1990s, there were operations using 51,000 psi. (Kirk-Othmer/"High Pressure Technology").
High pressure vessels are typically thick-walled, and composed of gun steels. During the 1950s, the preferred alloy was nickel-chromium-molybdenum, and later an alloy which additionally contained vanadium gained favor.
The down-timers' only experience with "pressure vessels" is of a rather specialized nature: cannon barrels. These have to resist the internal pressures generated by the explosion. For a given thickness, bronze is better than cast iron, and the down-timers are familiar with the concept of the "built-up" cannon, in which hot hoops or jackets are fit over the barrel and allowed to cool and shrink.
In 1773-91, Woolwich conducted experiments on muskets, reporting a maximum internal pressure of 2,000 atmospheres. (Ingalls). A Civil War era 15-inch Rodman gun, charged with 130 pounds of black powder, will experience 25,000 psi (1700 atmospheres) pressure. (NPS).
While explosives are not exactly a preferred source of pressure (they're dangerous, and don't lend themselves to continuous processing), Alfred Nobel "packed steel tubes with gunpowder or cordite and heated them until they exploded with tremendous force, briefly attaining pressures of 8,000 atmospheres at more than 5,000°C." (Hazen 35).
The up-timers include some steam engine enthusiasts, and a locomotive boiler can be considered a high pressure vessel suitable for continuous processing. Canon is a little vague on the issue, but it appears that there is at least one true locomotive on the main line by September 1633 (Flint, 1633, Chapter 33). That locomotive, of course, is generating high pressure steam. I suspect, based on the nineteenth-century locomotive data which the designers will be studying, that it has a steam pressure in the 75-200 psi range. That's still short of even a nineteenth-century autoclave, but it's a start.
To some extent, it will be possible to compensate for having weaker alloys by increasing the thickness of the vessel wall. However, that increases the expense of the vessel and, if it's externally heated or cooled, it impairs heat transfer. In addition, increasing vessel thickness doesn't address the Achilles' heel(s) of the system: the openings needed in order to add raw materials, withdraw product and perhaps supply or remove heat.
Solvents
Solvents are used as a medium in which the reactants can find each other, as catalysts (to help the reactants make or break bonds), and to control the temperature of the reaction. The traditional solvent for inorganic chemical reactions is water.
If cold water doesn't dissolve a particular salt, you can try hot water, and, if that fails, a dilute or concentrated solution of an acid (hydrochloric, sulfuric, nitric, hydrofluoric, acetic, etc.). If need be, the inorganic chemist may have recourse to pure acids, carbon disulfide, liquid ammonia, liquid sulfur dioxide, alcohol, benzene, chloroform, acetone, ether, and turpentine. CRC provides detailed information on the solubility of inorganic compounds in various solvents.
The choice of solvent can have interesting consequences. Barium chloride is soluble in water, while silver chloride is not. The reverse is true in liquid ammonia. Hence, in water, barium chloride reacts with silver nitrate to form silver chloride and barium nitrate. The reverse reaction is favored in liquid ammonia. (Purcell, 154).
Sometimes, not only do you not want to use water as a solvent, you need to make sure that there isn't even a trace of water present in the reactor. If so, you will use various dehydrating agents to prepare the reactor and the reactants for use.
While water is the most important solvent in inorganic chemistry, it has a lesser role in organic chemistry. Over twenty different organic compounds are used as solvents, including methanol, ethanol, acetone, acetic anhydride, pyridine, chloroform, diethyl ether, and benzene (Bordwell 201). In winter 1633-34, Henri Beaubriand-Levesque uses turpentine and ether as solvents for natural rubber. (Offord, "Letters from France," Grantville Gazette 12).
The "aprotic solvents" (e.g., dimethyl sulfoxide) are especially interesting because they seem to increase the reactivity of the reagents (M amp;B 492). DMSO can be obtained from the lignin of wood (EA/Dimethyl Sulfoxide).
Measurement Apparatus
The weight, and hence the mass, of a chemical is measured in chemistry labs by using an equal-arm balance. This has two pans, one holding the unknown, the other a known weight. EA/Balance says that the key to precision measurements is to use a knife edge as a fulcrum, whereas EB11/Weighing Machines warns that the knife-edges and their bearings must be extremely hard. All else being equal, a long arm balance will be more sensitive than a short arm one. Precautions must be taken vis-a-vis temperature, humidity, vibration (from air currents or through the ground), and other disturbances.
In industry, where the weights involved are much greater, the measurement will probably be with an unequal arm balance ("steelyard"), a spring scale, or a platform scale with multiplying levers. (EA/Weighing Machines).
The volume of a liquid is measured by introducing it into a graduated cylinder of suitable size. The flow rate of a gas can also be measured (suitable examples exist at homes which buy natural gas for heating purposes).
Temperature is measured, of course, by a thermometer. The first thermometers were of the liquid-in-glass type; first water, then alcohol, and finally mercury. The liquid expands as the temperature rises. Sealing the tube was essential to avoiding pressure effects. Mercury is liquid from -39 to 357°C. To measure higher temperatures, a gas-in-tube thermometer can be used. Hydrogen thermometers are used up to 1100°C, and nitrogen to 1550°C.
There are many other principles on which a thermometer can be constructed. The platinum resistance thermometer (1886) has been used to measure temperatures in the -259 to 630°C range.
Gas pressures are measured with a pressure gauge designed to handle a suitable range of pressures. There are hydrostatic gauges (manometers), which observe the movement of a column of mercury in a U-shaped tube; flexible pressure sensors like the 1849 Bourdon tube (a coiled tube which expands and causes an arm to rotate) or the diaphragm gauge (a membrane deforms under differential pressure); and thermal gauges, which detect the change in heat conductivity of a gas. A primitive manometer was invented by Torricelli in 1643. In Grantville, we probably have diaphragm barometers in several homes, and the steam buffs have pressure gauges which can work up to probably ten or twenty times atmospheric pressure.
pH is a measure of the acidity or basicity of a solution. You can measure it quantitatively with a pH meter, which is really a voltmeter with a glass electrode sensitive to hydrogen ions. There isn't any useful information about them in the encyclopedias, but it might be possible to reverse engineer them, and, if one of the chemists took a course in chemical analysis, they would be described there.
If a pH meter isn't available, then you can estimate pH by using one or more acid-base indicators. Those are chemicals which change color depending on the pH. The oldest indicator, litmus paper, was known to the down-timers. EA/Indicator mentions that a mixture of methyl orange, methyl red, bromothymol blue and phenolphthalein will change color continuously from red to violet as the pH varies from 3 to 10. Several of these indicators are discussed in slightly more detail in EB11/Indicator.
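For what the meter or the indicator is actually reporting, here is a minimal sketch; pH is the negative base-10 logarithm of the hydrogen-ion concentration, and the color bands below are a loose paraphrase of the mixed-indicator behavior just described, not exact transition points:

```python
import math

def ph(h_ion_molar):
    """pH is the negative base-10 logarithm of the hydrogen ion concentration."""
    return -math.log10(h_ion_molar)

def rough_indicator_reading(ph_value):
    """Very rough color bands for a mixed indicator spanning pH 3-10.
    The band boundaries here are illustrative assumptions."""
    if ph_value < 3:
        return "red"
    if ph_value < 6:
        return "orange/yellow"
    if ph_value < 8:
        return "green/blue"
    if ph_value < 10:
        return "blue/violet"
    return "violet"

print(ph(1e-4), rough_indicator_reading(ph(1e-4)))  # 4.0, acidic side
print(ph(1e-9), rough_indicator_reading(ph(1e-9)))  # 9.0, basic side
```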
Safety Equipment
The hazards posed by chemicals are fire, explosion, and irritation, burning or poisoning through inhalation of vapor, or skin or eye contact.
Borosilicate glass or stainless steel vessels, goggles, wash stations, specialized fire extinguishers, and fume hoods are all taken for granted in the late twentieth century laboratory but will be quite new to the down-timers.
For further details on hazard control in industrial processes, see Cooper, "Industrial Safety" (part 1 in Grantville Gazette 17 and part 2 in 18).
Separation Processes
Many naturally occurring inorganic chemicals are found together with other chemicals, from which they must be separated.
Chemical processes may also yield a mixture of products. When the reaction is performed, you have to separate the product from whatever else is present. At the very least, there will be solvent. If the reaction didn't go to completion, then there will be some starting materials still around. If your reactants and solvent weren't pure, then you have to worry either about the original contaminants or what they might have been converted into.
Some reactions, by their very nature, create more than one product. For example, there are decomposition reactions, which break a large molecule into two or more smaller ones. And there are many reactions in which there is a "change of partners" (compound AB reacts with CD to form AC and BD, where A, B, C and D represent pieces of the reactants).
Separation of the mixture usually depends on the physical properties that differentiate the desired chemical from the others with which it is associated. But suppose that you need to separate X (desired) from Y (undesired), and you can't do so directly. Well, there are tricks that depend on the different chemical reactivities of X and Y. You could chemically convert X to Z, separate Z from Y, then convert Z back to X. Or convert Y to Z, and separate X from Z. Or even convert X to Z and Y to W, separate Z and W, then convert Z back to X.
The more common separation processes, and the related physical properties, are:
Distillation/Boiling/Condensation: Boiling point (vapor pressure)
Recrystallization: Solubility of Pure versus Mixed Solutes
Decanting/Filtration: Solubility in a particular solvent, and particle size
Extraction: Difference in solubility between two immiscible liquids
Stripping: Difference in solubility in a liquid and in a gas
Sedimentation/Centrifugation: Density
Magnetic Separation: Magnetism
The down-timers are familiar with simple boiling (distillation), but not with techniques such as fractional distillation and vacuum distillation. I will discuss the more advanced techniques in a forthcoming article on the organic chemical industry. Since the down-timers have only the vaguest concept of gases, they are unaware of the elements that can be collected by the liquefaction of air.
Recrystallization was used by Biringuccio in the sixteenth century to purify leached saltpeter. (Bohm). In the simplest form of recrystallization, the crude material is dissolved in a minimum quantity of a single solvent, heated enough to bring it all into solution, and then allowed to cool. The principal component crystallizes out first, in a purer form. I am not sure that the down-timers know about multi-solvent crystallization. In any event, modern chemistry increases the number of solvents from which to choose. We are also now more aware of the importance of initiating the crystallization step by providing a seed crystal or creating a seeding surface.
The down-timers also know that some reactions form precipitates, which can then be separated from the remaining liquid by decanting the latter. And they filtered liquids through felt, paper, and porous stones. (Bolton). However, they only practiced gravity filtration, not vacuum filtration, and their filter materials can be improved upon.
The down-timers have prepared extracts, usually with water, of various plant tissues (and the aforementioned leaching is also a form of extraction). However, they haven't really exploited extraction with organic solvents.
Since the down-timers don't know of any gas other than air, and use air in chemical processes only as an oxidant, they aren't aware of the use of a gas to selectively remove a chemical from a liquid.
Density separation by gravity has been used since antiquity. However, centrifugal separation didn't begin until the nineteenth century (a centrifuge was first used to separate cream from milk). A centrifuge artificially achieves a sedimenting force much greater than gravity, and hence can separate materials of different density much faster than gravity can.
The down-timers are barely aware of the existence of magnetism, and they lack powerful magnets. Hence, they haven't performed magnetic separations, e.g., of ferrous from non-ferrous metals in recycling operations.
Scaling Up
In the seventeenth century, there were chemical processes, like dyeing and tanning, which could be called industrial processes. Nonetheless, there was no industrial production of chemicals, with the arguable exception of refining ores to metals.
The first chemical compounds produced in reasonably pure form on a large scale were sulfuric acid (late eighteenth century) and soda ash (early nineteenth century). Hence, the down-time alchemists are not accustomed to operations on an industrial scale.
Nowadays, the scaling up of a chemical process is the work of the chemical engineer. In the nineteenth century, chemists teamed up with mechanical engineers. The emphasis of chemical engineers is on "unit processes"-for example, different types of separation.
There are a variety of process changes that must be made when scaling up from laboratory scale (batch size under a kilogram) to industrial scale (tons of material) (White, 117-18). The most obvious one is that the reaction vessels change from glass to metal, but there are others.
Process development is the redesign of a laboratory process to work on the industrial scale. This development work is done on a "pilot plant" scale, intermediate between the laboratory and industrial scales.
The raw material samples that are run through the pilot plant process are only those that would be available in commercial quantities if the process were accepted for production use. The idea is to avoid using raw materials that will require synthesis, or extensive purification.
Solvents are chosen, whenever possible, so that they don't present severe fire, explosion or toxicity hazards, and so they are recoverable, in reasonable yield (e.g., at least 85%) for reuse.
Since recovery is incomplete, it is a good idea to find ways of minimizing the amount of solvent needed in the first place.
If expensive liquids are involved in the process, whether as solvents or reactants, mockup studies can be performed. That is, an inexpensive fluid with the right physical properties is used as a surrogate to test flow through the system. (Euzen 16).
Many physical processes are size sensitive because of surface/volume ratio considerations. Heating, cooling or filtering material may take minutes on the lab scale but hours on the industrial scale. Extraction of solute from one liquid to another is also on the slow side. The elongated time scale can cause a variety of problems.
There is a general preference for a short time cycle from beginning to end of the production process, but this can cause other problems. For example, a short time cycle may be achievable only if the temperature is allowed to rise rapidly. A temperature rise that is acceptable on the lab scale may result in a fire or explosion when large quantities are involved. The rate of addition of reactants may need to be reduced to compensate.
Significant byproducts of the reaction need to be identified. If you can obtain samples of these byproducts, you can add them to the product and see how the properties change. In this way, you can determine the tolerance limits to be enforced by quality control personnel on the industrial scale.
Many chemical reactions do not yield a single product, even in theory. Others would do so if the reactants were pure, but the required purity may not be obtainable in the early post-RoF period. Separation processes are chosen so that yield is high; crystallization, if necessary, is preferably the last step, because yields are 90% at best.
Ideally, the byproducts are useful in their own right, and recoverable for sale. For example, Spanish pyrites (iron disulfide) were not only used to make sulfuric acid, they usually contained 3-4% copper, which could be profitably extracted from the cinders. (EB11 "Sulphuric Acid").
The good news is that there are economies of scale. Euzen (9) says, "the capital investment normally required for the transformation of the raw material into a given product varies by the power of 0.7 with the capacity of the unit."
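A sketch of what that 0.7 exponent implies (the base cost and capacities below are made-up numbers):

```python
def scaled_capital_cost(base_cost, base_capacity, new_capacity, exponent=0.7):
    """Capital cost scales with capacity raised to a fractional power,
    here 0.7 as Euzen gives it (the classic 'six-tenths rule' family)."""
    return base_cost * (new_capacity / base_capacity) ** exponent

# Illustrative: a plant with 10x the capacity costs roughly 5x as much, not 10x.
print(scaled_capital_cost(base_cost=100_000, base_capacity=1, new_capacity=10))
```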
Batch versus continuous. In a batch process, the raw materials are loaded into the reactor, the reaction is carried out to completion, the products are removed, and the reactor is cleaned out, ready to repeat the cycle. In a continuous process, the reactor is (almost) never shut down. As product is pulled out, new raw material is added.
Continuous processes are typically very efficient; they are amenable to production of extremely large volumes at a very low operating cost. In part, that low operating cost is attributable to the relative ease with which a continuous process can be automated.
However, there are a few catches. First, continuous processes typically use equipment specially designed for the process in question. If the demand for the product drops, you have equipment which is going to waste. If there is an emergency demand for a different product, you need to set up a separate (batch) reactor to deal with it.
Second, continuous processes must be much more closely monitored. You need real time, or near real time, surveillance of the levels of all the raw materials and products so that, if you're running a little low on one reactant, you can toss more in. And if the product mix isn't correct, you can try to figure out why, and fix the problem.
Third, and this is related to the first two points, continuous process plants tend to have high start up costs.
Fourth, you are at the mercy of your suppliers (and the transportation infrastructure). If you run out of one of the reactants because a delivery isn't made, or because the material delivered isn't up to spec, then you may have to shut down the process. Idle equipment "burns" money, it doesn't make money. And with some continuous processes, it is difficult and expensive to "restart." You can alleviate these problems by keeping a large reserve of the raw materials, but even when that is practical (some materials don't store well) it is expensive.
This means that we aren't going to see much in the way of continuous processing during the first decade after RoF.
In parts 2 and 3 we will analyze the prospects for the production of specific elements, molecules and compounds.
Table 1-1: Top Inorganic Chemicals
Sulfuric Acid and Derivatives
Sulfuric Acid*: manufacture of sulfates, hydrochloric acid and phosphoric acid; acid catalyst
Phosphoric Acid: rust removal, acidification of foods, phosphate (including fertilizer) manufacture, soft drinks
Aluminum Sulfate: mordant, water purification, concrete additive
Limestone Derivatives
Calcium Oxide (Lime)*: steel and cement manufacture
Sodium Carbonate (Soda)*: glass flux; pH adjustment, electrolyte, water softener
Sodium Silicate (Water Glass): cement, egg preservative, timber preservative, porosity-reducer in concrete, fire protection
Industrial Gases
Nitrogen: ammonia production, petroleum recovery, perishables protection
Oxygen: desulfurization of steel; manufacture of ethylene oxide; welding, rocket fuel oxidizer, oxygen therapy
Carbon Dioxide: pressurized gas, fire control, welding, solvent (as liquid), refrigerant (as solid), reagent
Sodium Chloride Derivatives
Sodium Chloride*: production of chlorine, chloride, and sodium compounds
Sodium Hydroxide (Caustic Soda)*: strong base in soap, paper, detergent, synthetic fiber manufacture
Chlorine: disinfecting water, bleaching paper, production of vinyl chloride plastics and chlorinated organics
Hydrochloric Acid*: regeneration of ion exchangers, pickling steel, pH control, production of chlorides and chlorinated organics, including PVC
Ammonia*: raw material for making nitric acid, ammonium sulfate, chloramine; refrigerant; fertilizer (as water solution); fuel
Nitric Acid*: manufacture of nitrates; oxidizing agent
Ammonium Nitrate: fertilizer, oxidizing agent (in explosives)
Ammonium Sulfate: fertilizer, preparation of ammonium salts, protein precipitant
Titanium Dioxide: white pigment, photocatalyst
Potassium Carbonate (Potash)*: soap, glass production; drying agent; fire suppressant
Carbon Black*: pigment, tire filler
(Source: Chenier, Survey of Industrial Chemistry, Table 2.1. Uses from Wikipedia.)
Finding Your Way in Another Plane
Written by Kevin H. Evans
More than anything else, air travel has become one of the great indicators of up-time connections. Aircraft and other flying devices show the influence of up-time technology on the seventeenth century. Perhaps one of the hallmark questions asked of people who return from a visit to the USE will be, "Did you see a flying machine?"
Indeed, aircraft will be among the first items sought after by governments in the seventeenth century once they know flight is possible. This will rapidly create a situation where many aircraft, perhaps as many as fifty over the next five or ten years, will be flying across Europe. Navigation will become a serious issue. Next time you get a chance, look out the window of an aircraft and look at the ground. Honestly, it all looks the same from the air. Getting from your start point to your destination can be fairly difficult, especially because there are no maps that reflect the landmarks of the seventeenth century from the air.
There are a few up-timers who have been trained in aerial navigation. These experts will be able to pass on some of the knowledge needed to fly safely from point A to point B. Nevertheless, the slightest mistake can leave your aircraft many miles from your destination. A further concern is that there are very few designated landing sites. Of course, any open field will be suitable for most of our aircraft, but once you land there is no fuel, no certain knowledge of exactly where you are, and no supplies or ground crew to get you back into the air. It becomes absolutely necessary, therefore, to actually arrive at your desired destination. As a result, some form of aircraft navigation aid will be essential.
Aircraft navigation aids come in two types. The first type are called landing aids, and have to do with helping the pilot to get his aircraft safely onto the ground without bending it. The second type are in-flight navigation aids. These have to do with helping the pilot find his way from origin to destination during a flight.
Landing aids are anything used by the pilot to guide himself to a safe landing. These devices can be both aircraft- and ground-mounted. Ground-mounted devices are usually lights, especially for nighttime operation. These lights are used to indicate the size of a runway, the end of a runway, and whether or not the aircraft is on the appropriate slope approaching the ground. This appropriate slope, about fifty feet of descent for every thousand feet of forward travel (roughly five percent, or about three degrees), is called the glide slope.
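The glide-slope arithmetic is simple enough to sketch; the 70-knot approach speed below is an assumed figure for a light aircraft, not something from the text:

```python
import math

def glide_slope(descent_ft, forward_ft):
    """Gradient (percent) and angle (degrees) of an approach path."""
    gradient = descent_ft / forward_ft
    return gradient * 100.0, math.degrees(math.atan(gradient))

def descent_rate_fpm(ground_speed_knots, gradient_percent):
    """Rate of descent, in feet per minute, needed to hold that gradient.
    One knot is roughly 101.3 feet per minute over the ground."""
    return ground_speed_knots * 101.3 * gradient_percent / 100.0

pct, deg = glide_slope(50, 1000)  # the fifty-feet-in-a-thousand slope from the text
print(f"{pct:.0f}% gradient, {deg:.1f} degrees")
print(f"{descent_rate_fpm(70, pct):.0f} ft/min at an assumed 70 knots")
```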
Aircraft-mounted equipment involves an electronic mechanism that, by the use of needles in a dial, can indicate to the pilot whether or not the aircraft is approaching on the appropriate glide slope and whether or not the aircraft is approaching the air strip from the correct direction.
The first of the ground mounted systems is called a PAPI or Precision Approach Path Indicator. And there is the similar VASI, or Visual Approach Slope Indicator. The systems are composed of a series of lamps, white on top, and red on the bottom. These are inside a shrouded mounting that limits which lamp is visible to the pilot depending on the angle the device is viewed from.
The old saying is "if you're red, you are dead." When the aircraft is at the proper angle, one red light and one white light are visible to the pilot. If the pilot can see two red lights, he is coming in too low and is in danger of hitting the ground. If the pilot sees two white lights, the aircraft is too high and will not land at the near end of the runway. This system is particularly desirable because it is fairly easy to implement.
The next ground-mounted landing aid is called an approach lighting system, or ALS, more simply referred to as a light rail. This is a series of lamps mounted on poles in such a manner that the pilot will only see one lamp when he is approaching at the proper angle. Light rails can also include what are called flashers, which means that the lamps flash in sequence so as to appear to float towards the ground.
As long as we're talking about lamps, there are a few other lamps that need to be mentioned. The runway is normally marked by a line of green lights that indicates the beginning of the safe landing zone of the airstrip. Additionally, many groomed airstrips have side markers to show the edges of the runway. During daylight hours these are usually short posts painted a contrasting reflective color; at night they are usually red or blue lights.
The last light I want to mention is that of the rotating locator beacon. This is a very bright light, mounted on a tower, that indicates to the pilot where the landing area is. Later on we will be talking about other navigation aids, but these devices only get the aircraft to the general area of the landing zone. The rotating beacon is a visual indicator that will guide the pilot the last few miles to the air strip.
More complicated are the electronic landing aids. These are composed of three devices. The first device is called the localizer. This is a moderately complicated horizontal antenna mounted at the far end of the runway. Because of the physical placement of the antenna elements, a radio signal is emitted that is composed of a series of lobes radiating from the antenna. These lobes can be detected by the aircraft and indicate whether it is approaching the runway along the correct line.
The second device is called the glide slope. The glide slope is another set of antennas, mounted vertically. These provide a radio signal, detectable by the aircraft, that gives the proper angle for descending to the landing strip.
The third device is called the ILS display. This is a device mounted in the airplane that detects the radio signals from the glide slope and localizer and indicates to the pilot whether or not the aircraft is making a proper approach.
For any of this to work, however, you have to be able to find the airport. Historically, airports were first indicated by a smudge pot. That is, they had a large barrel full of oily rags. This would be set on fire, creating a large smoke cloud to give a visible indication of where the airfield was. It is also very common to have a wind sock. This device is usually a large cone of brightly colored fabric that is attached to a pole. It is open on both ends, with the narrower end at the bottom, so as to indicate the wind direction and strength for a pilot looking for a landing site. Direction, of course, is indicated by the direction the sock is pointing, and the strength is indicated by how much of the sock is fully inflated and standing out.
Everything we've mentioned up until now works just fine as long as you're operating out of one airport. Navigating from one airport to another, especially one a great distance from your starting point, is a very specialized skill.
As we mentioned earlier, if you view the ground from above, one part of it looks very much like another. In fact, the higher you go, the more difficult it is to determine exactly what you're looking at on the ground. Experience and training can help the aerial navigator find his way around, but what every navigator really needs is a map. Aerial navigation maps are not really like any other map used now or in the past. Much of the detail on an aerial navigation map consists of numbers and letters, which refer to the different beacons that have been set up to help aircraft navigate from one point to another.
Further, while some high points are listed, most land features lower than one thousand feet in altitude are not. Rivers, major roads, and large cities are marked on the maps because they are easily discerned from altitude. Nonetheless, much of the information found on a standard map is not included on an air chart. The map is most important because of the information printed next to each of the marked beacons. Each beacon on the map is identified by its position and by the radio frequency it uses. A pilot who wants a fix plots each beacon and locates himself on the map using bearings from those beacons. This is done by finding the imaginary line to each beacon and extending it backwards towards where the aircraft is. Using two or more beacons, the lines will intersect and show where the aircraft is.
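Here is a minimal sketch of that plotting exercise on a flat map; the beacon positions and bearings are made up, and a real fix would also need corrections for compass error and wind:

```python
import math

def fix_from_two_bearings(beacon1, brg1_deg, beacon2, brg2_deg):
    """Position fix on a flat map (x east, y north, any consistent unit)
    from compass bearings taken from the aircraft to two beacons.
    The aircraft lies on the line through each beacon along that bearing;
    this solves for the intersection of the two lines."""
    (x1, y1), (x2, y2) = beacon1, beacon2
    # Direction vectors of the bearing lines (compass: 0 deg = north, 90 = east).
    d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    # Solve beacon1 + t*d1 = beacon2 + s*d2 for t (a 2x2 linear system).
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel; pick a different beacon")
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])

# Illustrative: beacon A at the origin bears 315 from us; beacon B at (10, 0) bears 045.
print(fix_from_two_bearings((0, 0), 315.0, (10, 0), 45.0))  # -> (5.0, -5.0)
```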
Navigation is further complicated by the air you are flying through. Crosswinds, headwinds and clouds can all interfere with navigation. Many pilots have been blown far off course by a crosswind that they could not even feel while flying. Headwinds can slow an aircraft down so that, while the pilot has an instrument stating one speed, in reality the aircraft is moving much more slowly over the ground. All of these factors require that the pilot of an aircraft be very careful with his navigation, especially in the seventeenth century, because airports are few and far between.
Creating aerial maps will require the ability to mark on the map exactly where all the transmission towers and radio stations are. With those positions known, a radio direction finder, which gives the compass bearing to each of the radio stations we can hear, will let the pilot fix his position on the map.
Navigation is also possible by referencing your position to large known landmarks seen on the ground, much like the bush-flying techniques now used in Alaska. These landmarks are marked on your map. While navigating, several other things need to be taken into consideration. Among them is the fact that the air you are in is normally moving and will push you around in the sky even though you think you're flying in a straight line. Allowance must be made for this either by continuous position checks or by calculating known wind drift as you fly and correcting for it.
Now for the tech stuff:
PAPI and VASI
The lamps themselves can be constructed with large grooved glass lenses which are tinted for color and placed in a sheet metal or wooden case. Illumination for the lamps can be provided by arc lights, or limelight, or even an intense flame. Certainly, standard high-wattage electric lamps could be used; however, the construction of these lamps may be difficult for some time.