## Numbers

I’ve been doing a lot of mathematics – mostly trigonometry and algebra – over the past year or so. In fact, I’ve been doing it for years. And the result is that I’ve become fairly proficient at it.

But today I strayed off the beaten track, and started doing some statistics. I sat through lots of statistics lectures at university, most of which I didn’t understand very well (or even at all). And what little I learned has been slowly rusting ever since, because I’ve seldom had any data that needed any statistical analysis done on it.

But now that I’ve got 400 completed questionnaires from the ISIS study in front of me, for once I’ve got some data that does need some statistical analysis. So yesterday I dug out an old statistics textbook, and started reading it. Because, when the study gets published, it might be helpful if I could say whether it was “statistically significant” at the 95% level, with a confidence interval included.

A lot of the ISIS questions had 5 optional checkboxes with each question (e.g. “How much more/less do you go to pubs?”), so that people could answer “much more”, “more”, “same”, “less”, or “much less”. And if I’ve got, say, 160 responses with “same” ticked, 200 with “less” ticked, and 40 with “much less” ticked, I wondered how I might convert words into numbers. And I decided that I’d call “same” 1, and “much less” 0, and “much more” 2, with “less” 0.5, and “more” 1.5. In fact “much less” shouldn’t really be 0, but perhaps 0.1. And to preserve symmetry, “much more” should be 1.9.
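As a sketch of that word-to-number scoring (the dictionary and variable names below are my own, and the counts are the illustrative ones from the post, not real survey data):

```python
# A minimal sketch of the word-to-number scoring described above.
# The counts are the illustrative ones from the post; the dictionary
# and variable names are my own.
scores = {"much more": 1.9, "more": 1.5, "same": 1.0, "less": 0.5, "much less": 0.1}
counts = {"same": 160, "less": 200, "much less": 40}

values = [scores[word] for word, n in counts.items() for _ in range(n)]
mean = sum(values) / len(values)
print(round(mean, 2))  # 0.66 with "much less" scored 0.1 (0.65 if scored 0)
```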

Once these numbers had been assigned to the words, it was easy to work out the mean value of all 400 responses as being u = 0.65. And the standard deviation – how much the numbers differed from the mean – as s = 0.32.

A bit more reading of my textbooks and online searching then turned up the formula for the confidence interval: u ± z·s/√n, where z is a number pulled out of a normal distribution table to represent the confidence level. For 95% confidence, z was 1.96, and that gave a 95% confidence interval between 0.62 and 0.68. I think that means that 95% of samples from a larger population, assuming it had the same standard deviation, could be expected to fall within this range.
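That calculation can be sketched in a few lines of Python (the figures are those quoted above; the variable names are mine, and this leans on the same normal approximation the textbook formula does):

```python
from math import sqrt

# The textbook formula from the post: u ± z·s/√n, with the figures
# quoted above (mean 0.65, SD 0.32, n = 400) and z = 1.96 for 95%.
u, s, n, z = 0.65, 0.32, 400, 1.96

half_width = z * s / sqrt(n)   # 1.96 × 0.32 / 20 ≈ 0.031
lo, hi = u - half_width, u + half_width
print(round(lo, 2), round(hi, 2))  # 0.62 0.68
```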

If the null hypothesis for the question “How much more/less do you go to pubs?” is the “same” or 1, that would seem to indicate that a result with a 95% confidence interval between 0.62 and 0.68 does not straddle 1, and is therefore a statistically significant result. But I wouldn’t like to bet on it. Particularly since my stats textbook doesn’t seem to approach hypothesis testing quite this way. Maybe tomorrow I’ll try and figure that out.

Anyway today I worked out my first 95% confidence interval in over 40 years. I think I’ll celebrate with a few shots of Talisker later.

But the process of reading maths textbooks reminded me of one reason why I often found mathematics hard. And it wasn’t the logic or reasoning behind any of it, which is very often astoundingly simple, but the words that mathematicians use.

Like mathematics. That’s a pretty daunting word. And so also are addition and subtraction and multiplication and division. They’ve all got between 3 and 5 syllables in them. They’re big words. And they’re strange words. And mathematics is full of them. Like numerator and denominator. They’re like mountains, and doing mathematics often seems like climbing mountains, rising ever higher. Beyond denominator there’s common denominator and beyond that, obscured by cloud, lowest common denominator. And the branches of mathematics have similar menacing names. Algebra. Geometry. Trigonometry (on which one could do oneself a serious injury). Calculus. Matrices. Determinants. Probability. Statistics. And they’re all words which ring and reverberate with multiple hidden meanings. Matrices has a touch of matriarchy about it, calculus a vein of calcium, determinants a shot of determination.

And I would regularly get shipwrecked on these words. I’d get stuck on the slopes of lowest common denominator, suffering from vertigo, and running out of water. Or normal distribution. What on earth was “normal” about it? Was there an “abnormal” distribution? Or a “post-normal” one? Or right angles? What’s “right” about them? Are there “wrong angles”? And is a confidence interval the time it takes to build up sufficient courage to ask Mary Jane out to a dance?

Yet once I’d managed to slither my way round the word algebra, and absorbed the notion of an equation, I found algebra remarkably simple and straightforward. So also with geometry, because I already knew what a point and a line and an angle were. But I was in terror of the vast and inconceivable Theorem of Pythagoras, rising like Everest, with its deadly hypotenuses, which were a species of horse that roamed its slopes.

I think that if I was ever to try to teach mathematics, I’d want to throw away all these words, because they just cause problems. And they’re not necessary. I think I’d maybe call mathematics numbers, and addition summing, and subtraction taking, and multiply times, and division over. And then nobody – no child at least – would ever be terrified by some multi-syllabic Kilimanjaro mountain of a word.

Because we’re all mathematicians. We use it every time we buy something in a shop with money. And anyone who can figure out how much money is needed to buy 3 packets of toffees at 85 pence/packet is already a considerable mathematician. And higher mathematics ought to be as easy as shopping on the fifth floor of Debenhams.


### 45 Responses to Numbers

1. Reinhold says:

Apart from Maths and right and left angles (somewhat OT, sorry Frank, but it contains a Number at least):

Did anyone notice that there was a remarkable rally against the new EU “tobacco rules” (i.e. cancer porns on cig packs and so on) in Brussels yesterday?

There were about 3,000 showing up!

I know people who’ve been there.

But the newspapers keep quiet about it, don’t they?

• Frank Davis says:

Reinhold, first I’ve heard about it here in the States!

• nisakiman says:

Nope. Seen nothing about it at all.

Meanwhile, in Putin’s Russia the madness continues apace:

According to the new bill, electronic simulators of smoking, chewing gum made in the form of cigarettes, and even any cylindrical and white cigarette-looking item will be banned.

http://english.pravda.ru/news/russia/21-01-2013/123530-electronic_cigarettes-0/

Although in Texas, a little ray of sanity trying to peep through the clouds of intolerance:

Sigler: Potential smoking ban more hurtful than helpful

• beobrigitte says:

It would seem all is not well in Scotland, either.

I’ve just stumbled across this:
scotsman.com/the-scotsman/opinion/comment/brian-monteith-smoke-free-scotland-benefits-only-bureaucrats-1-2748673

• lordsid says:

I’ve already commented there. Seem to have shut the ratz up, except one. (Reply with 2 “simple” questions - apparently requiring 1-word answers - he hopes, I don’t :-)

• beobrigitte says:

This is interesting, Reinhold!!! Over here there has been/is nothing in the news about this. Perhaps the BBC has been too preoccupied with the “Asthma-Miracle”….?

When I read that the “Aerzteblatt” (Medical Journal) wrote about it I expected the usual Anti-smoker drivel. There is remarkably little of it!

Nach den Plänen der EU-Kommission vom Dezember sollen unter anderem Zigaretten- und Tabakpackungen künftig abschreckender gestaltet werden. Vor allem würden größere Warnungen und Fotos etwa von Raucherlungen Pflicht.

“According to the plans of the European Commissioners in December, amongst other measures, cigarette and tobacco packs are to be designed to deter from smoking. Above all, enhanced warnings and pictures of smokers’ lungs would be essential.”
(Again, loosely translated, apologies for errors)

Isn’t it striking how the “smoker’s lung” is thought to be “essential”? Above all, medics do know that this “smoker’s lung” exists only in the mind of anti-smokers who keep peddling the same lie over and over and over again.
However often they repeat this (and other lies) it does NOT turn it into the TRUTH.

• jackiec06 says:

Hope this works: Here’s a tweet from one of our vapers who follows tobacco/e-cig news:

Tobacconists fume at EU cigarette restrictions | euronews, world news: euronews.com/2013/01/22/tob…— Patricia Clewell (@TreeceVapes) January 23, 2013

• beobrigitte says:

Health campaigners insist that the EU has spent too much time listening to the tobacco lobby and has not been transparent about its dealings with the trade.

Which tobacco lobby?

Transparency about dealing with lobby groups is more than welcome!!! Bring it on!!!

2. harleyrider1978 says:

Seems like we’ve been getting our news from Medical Journals for 5 years now!

3. margo says:

Off this subject: I’ve just discovered (see Dick Puddlecote) that there is a public consultation going on about minimum pricing for alcohol and an online form for anyone who wants to take part. Have just started on answering the questions and found some of them unanswerable. DP gives tips.

4. Ian Reid says:

Frank, don’t do it! I admire your writings and your fresh perspectives, and your doggedness in getting this survey completed. But let’s be frank about this. What you have is a small self-selected example of people expressing an opinion. You then heap rubbish epidemiology on top of this. Even in its own terms a 95% confidence interval means there is a 1 in 20 chance it’s happened by pure chance. In other words it’s a valiant attempt, but as evidence of anything it’s worthless. It’s the sort of nonsense they resort to, and we don’t want to be dragged down to their level. It has some value as a collection of anecdotes, but as a scientific survey I’m afraid it doesn’t just cut it.

You should go to numberwatch.co.uk and buy one of the two excellent books from the web site to educate yourself more on epidemiology and why, as practised today, it has become the tool of the charlatan.

• Frank Davis says:

What you have is a small self-selected example of people expressing an opinion.

I don’t think that they are self-selected. The only self-selected respondents are the ones who completed the online survey, and they are only 10% of the sample. They’re self-selected, because they selected themselves to respond to the survey. The others are simply people who have been approached and invited to complete the questionnaire, and who did so.

In my own case, the methodology was to sit in pub gardens keeping an eye out for smokers, and then go over and ask them if they’d like to do the survey. I’d estimate that 95% of them agreed to do so.

If that’s ‘self-selected’, then any survey of this sort that goes around canvassing opinions in any way whatsoever has a ‘self-selected’ sample.

Even in its own terms a 95% confidence interval means there is a 1 in 20 chance it’s happened by pure chance. In other words it’s a valiant attempt, but as evidence of anything it’s worthless. It’s the sort of nonsense they resort to, and we don’t want to be dragged down to their level.

I don’t think it’s at all worthless. What I’m half inclined to agree with you about is the use of confidence intervals. Because that’s the way that Tobacco Control works, to lend a spurious ‘scientific’ importance to their work. And until the last few days, I was inclined to leave it out completely. But I’m endlessly mathematically curious, and so I had to have a shot at pulling some confidence intervals out of it.

It has some value as a collection of anecdotes, but as a scientific survey I’m afraid it doesn’t just cut it.

It’s not a collection of anecdotes. There are no anecdotes in it at all. It’s a collection of ticked boxes, plus a few remarks.

Whether it’s ‘scientific’ or not, it’s a snapshot of the response of smokers to smoking bans. A larger survey, conducted along the same or similar lines, might get a clearer picture.

I will agree, however, that it’s not science in the sense that nothing has been accurately measured (which is my complaint about all such surveys). But I don’t think that this renders it utterly worthless.

your doggedness in getting this survey completed.

I am indeed dogged, and will carry on being dogged, until I’ve doggedly got a dogged little report with some dogged findings.

Whether anyone will read it is another matter.

P.S. I bought John Brignell’s book, The Epidemiologists, 3 or 4 years ago. It’s certainly an interesting book.

• BrianB says:

I respectfully disagree, Ian.

Of course this isn’t a fully-randomised controlled trial – it is basically a public opinion survey – but that alone wouldn’t disqualify it, any more than it would a YouGov poll.

Of course, if it were designed to be published in a leading academic journal, and/or funded by a media outlet, then it would have been imperative that the sample be selected on a structured basis, following as closely as possible the demographic and geographic profile of the (smoking) population, and no doubt backed up by a control sample of either non-smokers in similar locations, or smokers in a country where there are no bans (is there such a place?). But that would typically be dealt with in a follow-up study, under the standard caveat of “more research is needed…” (ie “please give us a grant”!).

I don’t doubt that anti-smokers would have a field day in criticising such methodological shortcomings, but then they would lay themselves open to an embarrassing debate around the facts that all anti-smoking research suffers to a lesser or greater degree from poor methodology, and that more recent studies, especially those involving ‘passive’ smoking, are themselves little more than surveys of opinion, gossip and conjecture. All antismoking research is poor.

All in all, as a simple snapshot of smokers’ views it is useful. Let’s be honest, no other bugger is asking for our opinions, are they?

5. Marvin says:

“Like mathematics. That’s a pretty daunting word. And so also are addition and subtraction and multiplication and division. They’ve all got between 3 and 5 syllables in them. They’re big words. And they’re strange words. And mathematics is full of them”.

———

It’s the same with all the computer jargon that’s in use today!!!
It’s also the reason Latin was used in the middle-ages, to keep the knowledge confined to a few people, and all the ‘peasants’ in the dark and completely mystified.
If it was ever made simple and accessible, then bang goes the standing of the “experts”.

6. The opposite of a right angle is not a wrong angle, it’s a left angle.

And I think Ian Reid explains better what a confidence interval is; the way you describe it, it sounds more like ‘standard deviation’.

• Frank Davis says:

Since I’ve used the standard deviation to find the confidence interval, it probably isn’t that.

• BrianB says:

Frank’s confidence interval is correctly calculated – assuming that the data sample were deemed to be drawn from a population where the measurements (answers to the question in this case) are assumed to follow a “normal” (Gaussian) distribution.

The usual interpretation of the meaning of a 95% (say) confidence interval is that there is a 95% probability that the true value lies in the confidence range, hence a one in twenty chance that it lies outside (P < 0.05). One common problem with all P<0.05 statistical results is that a 1 in 20 chance is just too high to translate correlation into causation – ie to reject the ‘null’ hypothesis. True statisticians would expect something like P<0.001 (ie 1 in 1,000 error probability) before moving beyond the “mmmm, that’s quite interesting” stage. But epidemiologists are not statisticians.

For information, though, the real meaning of a 95% confidence interval is that: if you were to repeat the experiment that led to these calculated parameters (mean, SD and hence CI) 100 times, and thus create 100 new confidence intervals, then 95 of those confidence intervals would contain the true (population) value of the metric that you were calculating in the first place.

This is a hard enough concept for many statisticians to grasp, never mind the general population, which is why the more common interpretation (above) is used. It's wrong, but close enough to satisfy all but the most pedantic.
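The ‘repeated experiment’ reading described above is easy to check by simulation; here is a rough sketch (the population parameters are invented, normality is assumed, and the variable names are mine):

```python
import random
from math import sqrt

# A rough simulation of the 'repeated experiment' interpretation above:
# draw many samples from a population with a known mean, build a 95% CI
# from each sample, and count how often the true mean lands inside.
random.seed(1)
true_mean, true_sd, n, z = 0.65, 0.32, 400, 1.96

trials = 1000
covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, true_sd) for _ in range(n)]
    m = sum(sample) / n
    s = sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))  # sample SD
    half = z * s / sqrt(n)
    if m - half <= true_mean <= m + half:
        covered += 1

coverage = covered / trials
print(coverage)  # close to 0.95
```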

The real issue in interpreting confidence intervals, though, is that the original estimate (typically the mid point of the confidence range) is totally irrelevant when applying the sample results to the general population, and it really should never be stated as the result of the experiment. Only the confidence interval is valid, so all you can conclude is that the ‘real’ value lies somewhere in that range.

But, how often do you see epidemiological studies only quote the confidence interval, without the calculated statistic? Never! They should do, but “RR somewhere between 0.9 and 2.1” doesn’t sound as scary as “RR = 1.5”, or, even worse, “50% higher risk”, does it?

Which is one of the reasons why there are lies, damned lies and…

…you know the rest!

• BrianB says:

Sorry for an unacceptable number of typos above. Why WordPress can’t provide editing facilities is a bit beyond me. It’s such a bloody simple piece of technology!

• garyk30 says:

“RR = 1.5”, or, even worse, “50% higher risk”, does it?

Nor will you see it noted; that, since 1 is 66.7% of 1.5, a 50% increased risk means there is a 66.7% probability that what you are talking about did not cause the event.

• Frank Davis says:

Frank’s confidence interval is correctly calculated

Hurray! That calls for another shot of Talisker later.

7. garyk30 says:

Oddly enough; way back when, math was called ‘numbers’.

But then; back then, you could teach young kids with only a 2 year college certificate.

Now; of course, teaching is a speciality and teachers expect to be treated as Gods.

Modern teachers will teach math; but, they do not teach the use of numbers.

Nor do they teach the concepts of how to use numbers.

I post what seems to be confusing math ideas; but, none of my posts involve more than just the 4 basic math ideas.

But, my country school teachers would explain numbers and not just toss out babble.

When speaking of large numbers like million, billion, and trillion they would give examples rather than just say that each is 1,000 times the other.

Back then a dollar was a lot of money and I learned like this.

1. If you spent one dollar per second (100 penny candies/second):

2. It would take you about 11 days to spend 1 million dollars.

3. But, it would take you 32 years to spend a billion dollars. (When you are a kid, 32 years is a long time.)

4. It will take you 32,000 years to spend a trillion dollars. (That was like the ‘eternity’ the preacher talked about and I was impressed.)
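Those schoolroom figures check out; a quick arithmetic sketch (the constant names are mine, and leap years are ignored):

```python
# Checking the schoolroom figures above: spending one dollar per second.
SECONDS_PER_DAY = 60 * 60 * 24
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365  # ignoring leap years

days_million = 1e6 / SECONDS_PER_DAY
years_billion = 1e9 / SECONDS_PER_YEAR
years_trillion = 1e12 / SECONDS_PER_YEAR
print(round(days_million, 1), round(years_billion, 1), round(years_trillion))
# → 11.6 31.7 31710
```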

They explained what might be hard to understand and would tie decimals to fractions to common language.

for instance:
the concept of 0.5 as a decimal.
0.5 is 0.5/1
0.5/1 is the same as 1/2.
1/2 is one half
0.5 = one half

I do not know much about algebra; but I can do basic concepts.

I can understand that a smoker’s lung cancer death rate of 7/10,000 per year is a very small number and that it means that 9,993/10,000 per year will not die from lung cancer.

I can also figure out that 9,993/10,000 is a 99.93% chance of not dying.

8. Tony says:

Hi Frank,
Have you tried calculating the 99% confidence interval? That would be a more convincing figure.
To do so, just use 2.58 instead of 1.96. Or even go to 4 (from 1.96) which would give a 99.993% confidence level.
Full table of values here: http://en.wikipedia.org/wiki/Standard_deviation about halfway down the entry. Saves having to calculate them which is not easy (my scientific calculator uses Simpson’s rule to estimate definite integrals but it is still fiddly).
There may be another complication here in that your measured standard deviation might also have a confidence level attached but I don’t think Epidemiologists ever worry about that.
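For what it’s worth, those z-values can also be recovered from Python’s standard library rather than a printed table or numerical integration (a sketch; `NormalDist` is in the stdlib `statistics` module from Python 3.8):

```python
from statistics import NormalDist

# Recovering the z-values from the standard normal distribution instead
# of a table. For a two-sided confidence level c, z = inv_cdf((1 + c) / 2).
nd = NormalDist()  # standard normal: mean 0, sd 1
z95 = round(nd.inv_cdf((1 + 0.95) / 2), 2)
z99 = round(nd.inv_cdf((1 + 0.99) / 2), 2)
print(z95, z99)  # 1.96 2.58
```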

• Frank Davis says:

I suppose that the 99% confidence interval would be u ± z·s/√n, or 0.65 ± 2.58 × 0.32/20, or in the range 0.61 – 0.69.

But the ‘sample’ I’m using in the above post isn’t an actual one from the ISIS survey, but a simple one that I made up for the purpose of (re-)learning how to calculate confidence intervals.

• harleyrider1978 says:

Frank what we’ve got going on is one side trying to determine high-end safe levels of harm, and then we’ve got the fanatical side trying to make a case that minute levels are harmful. With the military and combat readiness at stake is where we find our war going on. The military side versus the green wacko EPA side. The AEGLs did not study chemical mixtures, only individual ones, as that’s where the harm is, large doses. They talked about synergism but left it alone, as synergism in nature isn’t possible, at least from everything I’ve read so far. Remember, to get a synergism to occur, as with the 3rd hand smoke study, they had to build a chamber and pressurize it to 1500% of natural HONO levels to get a reading of 1 ng NNK. Now I would assume that chamber under lab conditions created a synergistic effect to create that 1 ng reading. Leg Iron could probably delve deeper on that subject.

• harleyrider1978 says:

Submarines are maintained at 1 atmosphere, not a laboratory chamber level at all. I’ve been in both a pressure chamber and I’ve worked in and run hyperbaric chambers and vacuum/altitude chambers with flight physiologists.

• BrianB says:

“There may be another complication here in that your measured standard deviation might also have a confidence level attached but I don’t think Epidemiologists ever worry about that.”

No, the SD is the average distance (“deviation”) the data points are away from the calculated mean value – they are inter-dependent metrics. The Confidence Interval is simply a numeric range defined as a number of standard deviations either side of the sample mean according to the level of confidence (or statistical “significance”) you want to apply to the range. The range will be wider, the greater the confidence that the true population mean lies somewhere within it (every point in the confidence interval has the same probability). That is why you use “z-values” of 1.96 SDs for 95% confidence, 2.58 for 99% etc. Ultimately, though, the main determinant variables of the confidence interval extremes are the sample mean, the SD itself, and the sample size. The z-values are just constants. The confidence interval is just one way of representing the amount of statistical “error” (variance) in the sample data. The SD is the square root of this variance – so is not itself subject to further statistical error, and thus has no separate “confidence” implications.

Your point about epidemiologists has a big element of truth in it, though, for slightly different reasons. You will no doubt see many references in epidemiological studies regarding “confounders” (other variables in the data sample that could explain the calculated statistical relationship), and how they were “controlled for” in the analysis. Similarly, raw data will often be described as having been “adjusted”, or “standardised” for the purpose of supposedly removing any sampling biases, data source errors, whatever. Now these adjustments will usually involve using imprecise statistical variables (estimates) derived from other sources to be used as eg weighting factors in the adjustment process. The original, raw data values are now replaced by “adjusted” data values.

Where epidemiologists fall down is that they invariably fail to recognise that all of these adjustment variables/estimates, and the adjustments made with them, will introduce statistical error into the analysis. In other words, each “adjusted” data value will itself now have its own implicit confidence range based upon the uncertainty (imprecision) introduced by the adjustment variable, and this should be carried through the rest of the analysis. So, when the final calculation is carried out (say the headline risk statistic), it is no longer simply based on a mean, SD and CI of a sample of data points, it should now be based upon the equivalent metrics (mean, SD) of a sample of confidence intervals.

Whilst it may be tricky to visualise how this is calculated, the important thing to remember is that the error represented by the confidence intervals at each stage of the calculation (ie in each adjustment) accumulates in a multiplicative way, so that the error in the final calculation is actually significantly more than that which is assumed from treating the adjusted data values as if they were empirical (raw) data measurements. As a result, the ‘true’ final confidence interval will always be much, much wider than that which gets quoted for the given significance level.

Yes, the epidemiologists will ignore this fact of (statistical) life – mostly, I suspect, due to ignorance of the mathematical methods they are using, rather than malevolence – but it is one good reason why one should always view quoted Confidence Interval values with deep suspicion, as they will invariably be massively understating the true error (and hence uncertainty) in the final calculated statistic.

Which is one reason (of many) why most epidemiological results are just wrong.

I somehow get the feeling that I will be “red-carded” for writing all this. :o{

• Tony says:

Hi again Frank, (and Brian)
I’m a little concerned about your standard deviation calculation. If ‘s’ is the standard deviation of the underlying probability distribution for a single questionnaire then for a sample of size ‘n’ the standard deviation of the sample mean will indeed be s/sqrt(n). But if ‘s’ is the observed standard deviation from your sample then it should not be divided by sqrt(n) but instead used as an estimate without change, i.e. m ± zs for the CI. It’s far too late to think about this sort of thing though.

I wrote this before seeing Brian’s post and given the time I’ll try to reply very briefly.
Yes I agree fully with what Brian says but I’m thinking in terms of the measured SD being an estimate of the underlying distribution’s SD. Perhaps that is not relevant here. I’m not sure but in any case I suspect that the measured SD should not be divided by sqrt(n).
I should point out that Brian clearly understands the subject much better than I do.

Apologies for the late night ramble.

• Frank Davis says:

If ‘s’ is the standard deviation of the underlying probability distribution for a single questionnaire then for a sample of size ‘n’ the standard deviation of the sample mean will indeed be s/sqrt(n). But if ‘s’ is the observed standard deviation from your sample then it should not be divided by sqrt(n) but instead used as an estimate without change, i.e. m ± zs for the CI.

A single questionnaire would only have one answer to a question like “Do you go to pubs more/same/less?” And the answer would be either 2 or 1.5 or 1 or 0.5 or 0. And the mean value of the single questionnaire would be the same number. Since standard deviation measures the difference between the mean value and the actual value, SD would come out as zero – no deviation at all. Or perhaps I misunderstand you.

Don’t worry about late night rambles – I write one every night.

• Tony says:

On further reflection I think I’m wrong about the sqrt(n) issue because the observed sample SD is the SD of numerous single questionnaires rather than the mean value of multiple samples.
Sorry for muddying the issue. It is waaay too late.

As to Frank’s point about SD of zero, I was referring to the probability distribution of the single questionnaire. For example, a single coin toss has probability ‘p’ of heads and variance p(1-p) giving SD of sqrt(p(1-p)) but once the coin is tossed, the result is of course final.

Time for bed now.

• Tony says:

Final thought:
You said:
“Since standard deviation measures the difference between the mean value and the actual value, SD would come out as zero – no deviation at all. “
But SD is the square root of the variance, which itself is the average of the squared deviations, so it would not be zero.
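The distinction being worked out in this thread, between the spread of individual answers and the uncertainty of their mean, can be sketched like so (the data values and names here are invented for illustration):

```python
from math import sqrt
from statistics import pstdev

# Illustrating the distinction above: the sample SD measures the spread
# of individual answers; the standard error (SD / sqrt(n)) measures the
# uncertainty of the sample mean.
data = [1.0, 0.5, 0.5, 0.1, 1.5, 1.0, 0.5, 0.5]
sd = pstdev(data)           # spread of the answers themselves
se = sd / sqrt(len(data))   # uncertainty of the mean, shrinks with n
print(round(sd, 3), round(se, 3))  # 0.409 0.145
```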

9. harleyrider1978 says:

Frank or anyone who can take this and turn it into Nuclear weapon for our side start doing it!

You can find the writing and breakdowns here in a drop down list

http://www.epa.gov/oppt/aegl/pubs/chemlist.htm

On submarines they break it down via atmospheric pressures; even the toxicology is based there the same way, via atmospheric pressures.

Here’s the toxicology breakdown:

Can’t copy it, damn it, it’s in PDF; it’s at the bottom of their chemical list.

Here they have the complete list and amounts per chemical at dose rates based upon:

10 min, 30 min, 60 min, 4 hr, 8 hr, and it’s in PPM!

These are supposed to be safe for even newborns through 3-year-olds with no harmful effects! Including folks with heart disease or asthma!

usually less than 1 hour (h), and only once in a lifetime for
the general population, which includes infants (from birth to 3 years of age),
children, the elderly, and persons with diseases, such as asthma or heart disease.

AEGLs represent threshold exposure limits (exposure levels below which
adverse health effects are not likely to occur) for the general public and are applicable
to emergency exposures ranging from 10 minutes (min) to 8 h. Three
levels—AEGL-1, AEGL-2, and AEGL-3—are developed for each of five exposure
periods (10 min, 30 min, 1 h, 4 h, and 8 h) and are distinguished by varying
degrees of severity of toxic effects.

I don’t know what it breaks out to, but when you can say this amount will do no harm at this level for 1 hour, and no heart attacks, it blows Stanton’s and the SG Benjamin’s 30-minute claims all to fucking hell!

• Frank Davis says:

You can find the writing and breakdowns here in a drop down list

Is tobacco smoke on the list?

• harleyrider1978 says:

Not as a single issue, no. Mike says it’s more PM2.5, but the submarine scrubbers remove all that through ionizing and charcoal-activated filters, ie scrubbers. The above is breakdowns of 400 chemicals at high dose rates. Some of those chemicals are in tobacco smoke. It’s the exposure rate to each amount that’s important; at rates of super high toxicity they gave safe levels for even babies.

• harleyrider1978 says:

The point here is smoking doesn’t release enough of anything to matter, even in gaseous states. The same OSHA standards are applied. The smoke isn’t at a level that’s going to harm anyone except maybe as an irritant, but even without smoking the air aboard is still full of stale smells, odors, oily residue. The galley is rigged with its own air cleaner from cooking and was usually the best place to smoke, as it cleaned all that air close by. Then there’s the fan room: fan 5 is the main ventilation fan for the boat on most SSBN-class subs. I was a submariner for a while.

The AEGL listing gives us ammunition with respect to high volumes of sustainable exposures with no harm even to babies. Take that time duration and break it down via normal doses and it’s nothing. But in effect they’re like saying you could be exposed to a small room of 100 million burning cigs for xxx time duration and suffer no heart attack or asthma attack in people so affected!

• harleyrider1978 says:

Now not long ago we heard the EPA was conducting human experiments using PM2.5 to try and prove harm; they failed! Now they are being sued over it in Federal court, as it’s against the law to conduct human experiments. Then we’ve got Dr Enstrom being fired over his PM2.5 study that showed nope, it doesn’t kill people. The folks at CALEPA didn’t like that one bit, as it would wipe out their push for more and more restrictions and put a complete end run on their cherished CARBON LAWS/GREEN LAWS. The whole damned thing is coming to a head, real science versus pseudo-science… that’s where we are at!

10. harleyrider1978 says:

Frank, check your bin, I've got a nuke weapon for us in it!

11. harleyrider1978 says:

As a first step in assisting the LEPCs, EPA identified approximately 400
EHSs largely on the basis of their immediately dangerous to life and health values,
developed by the National Institute for Occupational Safety and Health. Although
several public and private groups, such as the Occupational Safety and
Health Administration and the American Conference of Governmental Industrial
Hygienists, have established exposure limits for some substances and some exposures
(e.g., workplace or ambient air quality), these limits are not easily or
directly translated into emergency exposure limits for exposures at high levels

but of short duration, usually less than 1 hour (h), and only once in a lifetime for
the general population, which includes infants (from birth to 3 years of age),
children, the elderly, and persons with diseases, such as asthma or heart disease.
The National Research Council (NRC) Committee on Toxicology (COT)
has published many reports on emergency exposure guidance levels and spacecraft
maximum allowable concentrations for chemicals used by the U.S. Department
of Defense (DOD) and the National Aeronautics and Space Administration
(NASA) (NRC 1968, 1972, 1984a,b,c,d, 1985a,b, 1986a, 1987, 1988,
1994, 1996a,b, 2000a, 2002a, 2007a, 2008a). COT has also published guidelines
for developing emergency exposure guidance levels for military personnel and
for astronauts (NRC 1986b, 1992, 2000b). Because of COT’s experience in recommending
emergency exposure levels for short-term exposures, in 1991 EPA
and ATSDR requested that COT develop criteria and methods for developing
emergency exposure levels for EHSs for the general population. In response to
that request, the NRC assigned this project to the COT Subcommittee on Guidelines
for Developing Community Emergency Exposure Levels for Hazardous
Substances. The report of that subcommittee, Guidelines for Developing Community
Emergency Exposure Levels for Hazardous Substances (NRC 1993),
provides step-by-step guidance for setting emergency exposure levels for EHSs.
Guidance is given on what data are needed, what data are available, how to
evaluate the data, and how to present the results.
In November 1995, the National Advisory Committee (NAC)[1] for Acute
Exposure Guideline Levels for Hazardous Substances was established to identify,
review, and interpret relevant toxicologic and other scientific data and to
develop acute exposure guideline levels (AEGLs) for high-priority, acutely toxic
chemicals. The NRC’s previous name for acute exposure levels—community
emergency exposure levels (CEELs)—was replaced by the term AEGLs to reflect
the broad application of these values to planning, response, and prevention
in the community, the workplace, transportation, the military, and the remediation
of Superfund sites.
AEGLs represent threshold exposure limits (exposure levels below which
adverse health effects are not likely to occur) for the general public and are applicable
to emergency exposures ranging from 10 minutes (min) to 8 h. Three
levels—AEGL-1, AEGL-2, and AEGL-3—are developed for each of five exposure
periods (10 min, 30 min, 1 h, 4 h, and 8 h) and are distinguished by varying
degrees of severity of toxic effects. The three AEGLs are defined as follows:
[1] NAC completed its chemical reviews in October 2011. The committee was composed
of members from EPA, DOD, many other federal and state agencies, industry, academia,
and other organizations. From 1996 to 2011, the NAC discussed over 300 chemicals and
developed AEGLs values for at least 272 of the 329 chemicals on the AEGLs priority
chemicals lists. Although the work of the NAC has ended, the NAC-reviewed technical
support documents are being submitted to the NRC for independent review and finalization.

http://www.epa.gov/oppt/aegl/pubs/butane_volume12.pdf
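The AEGL scheme quoted above, three severity tiers (AEGL-1/2/3), each defined for five exposure periods, can be sketched as a simple lookup structure. A minimal illustration, with the concentrations left as unfilled placeholders rather than the published figures from the linked EPA document:

```python
# Sketch of the AEGL structure described above: three severity tiers,
# each defined for five exposure periods. Concentration values are
# left as None placeholders, not published figures.
EXPOSURE_PERIODS = ("10 min", "30 min", "1 h", "4 h", "8 h")
SEVERITY_TIERS = ("AEGL-1", "AEGL-2", "AEGL-3")

def empty_aegl_table():
    """Return a tier -> period -> concentration (ppm) table, unfilled."""
    return {tier: {period: None for period in EXPOSURE_PERIODS}
            for tier in SEVERITY_TIERS}

table = empty_aegl_table()
# A published value would be filled in like this (hypothetical number):
# table["AEGL-2"]["1 h"] = 17000
print(len(table), "tiers x", len(table["AEGL-1"]), "periods")  # 3 tiers x 5 periods
```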

12. harleyrider1978 says:

Claim: CO2 makes you stupid? Ask a submariner that question
Posted on October 17, 2012 by Anthony Watts

From Lawrence Berkeley National Laboratory, something that might finally explain Al Gore’s behavior – too much time spent indoors and in auditoriums giving pitches about the dangers of CO2. One wonders though what the Navy submarine service has to say about this new research:

We try to keep CO2 levels in our U.S. Navy submarines no higher than 8,000 parts per million, about 20 times current atmospheric levels. Few adverse effects are observed at even higher levels. – Senate testimony of Dr. William Happer, here

This is backed up by the National Academies of Science publication Emergency and Continuous Exposure Guidance Levels for Selected Submarine Contaminants,
which documents effects of CO2 at much, much higher levels than the medical study, and shows regular safe exposure at these levels…

Data collected on nine nuclear-powered ballistic missile submarines indicate an average CO2 concentration of 3,500 ppm with a range of 0-10,600 ppm, and data collected on 10 nuclear-powered attack submarines indicate an average CO2 concentration of 4,100 ppm with a range of 300-11,300 ppm (Hagar 2003). – page 46

…but shows no concern at the values of 600-2500 ppm of this medical study from LBNL. I figure if the Navy thinks it is safe for men who have their finger on the nuclear weapons keys, then that is good enough for me.

Elevated Indoor Carbon Dioxide Impairs Decision-Making Performance
Berkeley Lab scientists surprised to find significant adverse effects of CO2 on human decision-making performance.

Overturning decades of conventional wisdom, researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have found that moderately high indoor concentrations of carbon dioxide (CO2) can significantly impair people’s decision-making performance. The results were unexpected and may have particular implications for schools and other spaces with high occupant density.

“In our field we have always had a dogma that CO2 itself, at the levels we find in buildings, is just not important and doesn’t have any direct impacts on people,” said Berkeley Lab scientist William Fisk, a co-author of the study, which was published in Environmental Health Perspectives online last month. “So these results, which were quite unambiguous, were surprising.” The study was conducted with researchers from State University of New York (SUNY) Upstate Medical University.

On nine scales of decision-making performance, test subjects showed significant reductions on six of the scales at CO2 levels of 1,000 parts per million (ppm) and large reductions on seven of the scales at 2,500 ppm. The most dramatic declines in performance, in which subjects were rated as “dysfunctional,” were for taking initiative and thinking strategically. “Previous studies have looked at 10,000 ppm, 20,000 ppm; that’s the level at which scientists thought effects started,” said Berkeley Lab scientist Mark Mendell, also a co-author of the study. “That’s why these findings are so startling.”


Berkeley Lab researchers found that even moderately elevated levels of indoor carbon dioxide resulted in lower scores on six of nine scales of human decision-making performance.

While the results need to be replicated in a larger study, they point to possible economic consequences of pursuing energy efficient buildings without regard to occupants. “As there’s a drive for increasing energy efficiency, there’s a push for making buildings tighter and less expensive to run,” said Mendell. “There’s some risk that, in that process, adverse effects on occupants will be ignored. One way to make sure occupants get the attention they deserve is to point out adverse economic impacts of poor indoor air quality. If people can’t think or perform as well, that could obviously have adverse economic impacts.”

The primary source of indoor CO2 is humans. While typical outdoor concentrations are around 380 ppm, indoor concentrations can go up to several thousand ppm. Higher indoor CO2 concentrations relative to outdoors are due to low rates of ventilation, which are often driven by the need to reduce energy consumption. In the real world, CO2 concentrations in office buildings normally don’t exceed 1,000 ppm, except in meeting rooms, when groups of people gather for extended periods of time.

In classrooms, concentrations frequently exceed 1,000 ppm and occasionally exceed 3,000 ppm. CO2 at these levels has been assumed to indicate poor ventilation, with increased exposure to other indoor pollutants of potential concern, but the CO2 itself at these levels has not been a source of concern. Federal guidelines set a maximum occupational exposure limit at 5,000 ppm as a time-weighted average for an eight-hour workday.
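For comparison, the CO2 concentrations cited in this thread can be lined up in one place. The figures are as quoted above (the submarine averages from Hagar 2003 as cited in the NAS report, the 8,000 ppm ceiling from Happer's testimony):

```python
# CO2 concentrations (ppm) as cited in the surrounding text.
co2_ppm = {
    "typical outdoor air": 380,
    "Berkeley Lab study, low condition": 600,
    "Berkeley Lab study, mid condition": 1000,
    "Berkeley Lab study, high condition": 2500,
    "SSBN submarines, average (Hagar 2003)": 3500,
    "attack submarines, average (Hagar 2003)": 4100,
    "federal 8-h occupational limit": 5000,
    "Navy target ceiling (Happer testimony)": 8000,
}

# Print from lowest to highest concentration.
for place, ppm in sorted(co2_ppm.items(), key=lambda kv: kv[1]):
    print(f"{ppm:>5} ppm  {place}")
```

All three study conditions sit below the submarine fleet averages, which is the comparison the commenter is drawing.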

Fisk decided to test the conventional wisdom on indoor CO2 after coming across two small Hungarian studies reporting that exposures between 2,000 and 5,000 ppm may have adverse impacts on some human activities.


Berkeley Lab scientists Mark Mendell (left) and William Fisk

Fisk, Mendell, and their colleagues, including Usha Satish at SUNY Upstate Medical University, assessed CO2 exposure at three concentrations: 600, 1,000 and 2,500 ppm. They recruited 24 participants, mostly college students, who were studied in groups of four in a small office-like chamber for 2.5 hours for each of the three conditions. Ultrapure CO2 was injected into the air supply and mixing was ensured, while all other factors, such as temperature, humidity, and ventilation rate, were kept constant. The sessions for each person took place on a single day, with one-hour breaks between sessions.

Although the sample size was small, the results were unmistakable. “The stronger the effect you have, the fewer subjects you need to see it,” Fisk said. “Our effect was so big, even with a small number of people, it was a very clear effect.”
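Fisk's remark that a stronger effect needs fewer subjects is the standard power calculation. A rough sketch, assuming a two-sided test at the 5% level with 80% power (z values 1.96 and 0.84) and the usual normal approximation; the effect sizes below are illustrative, not figures from the study:

```python
import math

def subjects_needed(effect_size, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size to detect a given standardized effect
    size (Cohen's d) via the normal approximation. Rounding to six
    decimals guards against floating-point noise before the ceiling."""
    return math.ceil(round(((z_alpha + z_beta) / effect_size) ** 2, 6))

for d in (0.2, 0.5, 0.8, 1.5):
    print(f"effect size {d}: about {subjects_needed(d)} subjects")
```

A small effect (d = 0.2) needs about 196 subjects; a very large one (d = 1.5) only about 4, which is exactly Fisk's point about seeing a big effect with few people.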

Another novel aspect of this study was the test used to assess decision-making performance, the Strategic Management Simulation (SMS) test, developed by SUNY. In most studies of how indoor air quality affects people, test subjects are given simple tasks to perform, such as adding a column of numbers or proofreading text. “It’s hard to know how those indicators translate in the real world,” said Fisk. “The SMS measures a higher level of cognitive performance, so I wanted to get that into our field of research.”

Strategy and Initiative

Strategic thinking and taking initiative showed the most dramatic declines in performance at 2,500 ppm carbon dioxide concentrations.

The SMS has been used most commonly to assess effects on cognitive function, such as by drugs, pharmaceuticals or brain injury, and as a training tool for executives. The test gives scenarios—for example, you’re the manager of an organization when a crisis hits, what do you do?—and scores participants in nine areas. “It looks at a number of dimensions, such as how proactive you are, how focused you are, or how you search for and use information,” said Fisk. “The test has been validated through other means, and they’ve shown that for executives it is predictive of future income and job level.”

Data from elementary school classrooms has found CO2 concentrations frequently near or above the levels in the Berkeley Lab study. Although their study tested only decision making and not learning, Fisk and Mendell say it is possible that students could be disadvantaged in poorly ventilated classrooms, or in rooms in which a large number of people are gathered to take a test. “We cannot rule out impacts on learning,” their report says.

The next step for the Berkeley Lab researchers is to reproduce and expand upon their findings. “Our first goal is to replicate this study because it’s so important and would have such large implications,” said Fisk. “We need a larger sample and additional tests of human work performance. We also want to include an expert who can assess what’s going on physiologically.”

Until then, they say it’s too early to make any recommendations for office workers or building managers. “Assuming it’s replicated, it has implications for the standards we set for minimum ventilation rates for buildings,” Fisk said. “People who are employers who want to get the most of their workforce would want to pay attention to this.”

Funding for this study was provided by SUNY and the state of New York.

# # #

Lawrence Berkeley National Laboratory addresses the world’s most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab’s scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy’s Office of Science. For more, visit http://www.lbl.gov.

============================================================

Given what I’ve learned about the Navy exposure, I think this is just another scare tactic to make CO2 look like an invisible boogeyman.
http://wattsupwiththat.com/2012/10/17/claim-co2-makes-you-stupid-as-a-submariner-that-question/

13. harleyrider1978 says:

5.3 Air Purification
5.3.1 CO-H2 Burner
This is used to remove CO, H2, hydrocarbons, and other contaminants by oxidizing
them to CO2 and H2O. The system draws preheated air through a CuO/MnO2 catalyst bed at
about 600° F. The product gases are cooled and passed through a bed of Li2CO3 to remove
any acidic gases (like HCl from destruction of refrigerants). LiOH is a minor component of
the catalyst as well, so some of the acids are removed at that stage. In the final stage, the air
is passed through activated charcoal, a simple absorber. The catalyst can be used
indefinitely, if not abused, and it does not require much additional fuel once it has reached
operating temperature.
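The oxidation step in 5.3.1 amounts to catalytically burning the contaminants, with the Li2CO3 bed then neutralizing acid gases. The net reactions below are standard combustion and acid-base chemistry, not taken from the quoted document, with methane standing in for the hydrocarbons:

```
2 CO + O2 -> 2 CO2                      (carbon monoxide to carbon dioxide)
2 H2 + O2 -> 2 H2O                      (hydrogen to water)
CH4 + 2 O2 -> CO2 + 2 H2O               (hydrocarbon oxidation)
Li2CO3 + 2 HCl -> 2 LiCl + CO2 + H2O    (acid gas removal in the Li2CO3 bed)
```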
5.3.2 Activated Carbon
Charcoal can be prepared from any carbonaceous material and is activated by the use
of controlled heating, such as with steam. The heat removes from the capillaries all
material that cannot be carbonized. Activated charcoal differs from charcoal in that activation
increases the vapor adsorption capability of charcoal. For submarine service, the activated
coconut shell charcoal is generally referred to as activated carbon.
The behavior of activated carbon in removing contaminant gases is a complex