Tea Time – 03 January 2017

Save time and effort – empty your wallet into the dumpster

Consumer products have a tendency to become commodities rapidly.  Look at the pace at which cell phones and tablet computers have proliferated.  The age-old Cathode Ray Tube (CRT) televisions were quickly replaced by flat-panel LCD and LED models.

Prices start high and, as more providers enter the market, prices drop and eventually stabilize.  The stable point is reached when the surviving manufacturers hit what economists call Minimum Efficient Scale (MES).  At this point, manufacturers have minimized the component cost of the product, maximized the efficiency of production and, unless a new technology arrives, minimized the consumer cost of the product.

However, every once in a while you will see a product that is much lower in cost than apparently equivalent products.  How can this be?  Well, if the same components are required to produce the product at maximum efficiency, the only remaining way to cut the price is to reduce the quality of the product.  This can be done by using cheaper components and sub-standard materials.  It seems like a bargain, but it comes to naught as the components fail due to their shorter lifespans.  One example is shown below, in which the electrolytic capacitors were under-rated, heated up, and blew their seals, rendering an entire television useless.

[Image: blown electrolytic capacitors, a.k.a. “bad caps,” on a television circuit board]

That being said, it is not a world of “the more you pay the more it is worth.”  It always behooves a person to investigate expected prices for commodity products and make an informed purchase based on evidence.

If you want to give a low-cost gift or save a buck by buying the $10 Christmas special, do yourself a favor and just empty your wallet into a dumpster instead.  You will be sparing the planet the plastic and packaging that would end up in the landfill shortly anyway, substituting it with easily biodegradable paper.

Time to put the kettle back on.

Tea Time – 02 Jan 2017

There is an amazing amount of research being directed toward aging.  Of interest is an article on the difference in cell proteins of young vs. old animals (http://www.cell.com/fulltext/S2405-4712%2815%2900110-6).  Apparently there is only about a 10% variation in the proteins produced in old vs. young animals.  On top of that, the variation is not consistent across all organs.  This makes sense, as some tissue regenerates rapidly (e.g., liver, stomach lining, skin) while other tissue does not (e.g., neurons).

Ok.  So, our organs will go bad at different rates.  Replacement livers and hearts should be right around the corner.  See the likes of Revivicor (http://www.revivicor.com/), which is working diligently toward the humanification of pig organs via genetic manipulation.  But neurons?  Therein lies the rub, and the need for additional research into neuro-degenerative diseases such as Parkinson’s, Alzheimer’s, frontal-lobe dementia and the like.  I cannot think of anyone who wishes to live forever in a fit but mindless body, waiting to have their diaper changed.

So, what happens if we fix these maladies?  Well, the net of the current birth and death rates leaves the human population growing at somewhere below 2% per year.  This growth rate is not uniformly distributed across the planet’s geography (https://www.cia.gov/library/publications/the-world-factbook/rankorder/2002rank.html).  Along with this is an uneven distribution of arable land across the planet (https://en.wikipedia.org/wiki/Arable_land).

If we gave 0.25 acres of land toward food per person, and there are about 13,958,000 square kilometers (roughly 3.45 billion acres) of arable land, then about 13.8 billion people could be fed.  Of course, this assumes good weather, insects at bay, and healthy crops all around.  If there are 7.4 billion people currently on the planet (https://en.wikipedia.org/wiki/World_population), then in theory the planet could feed about 86% more people than it holds today.
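For the curious, here is the back-of-the-envelope arithmetic as a small Python sketch.  The inputs are just the assumptions above (the arable-land figure and the 0.25-acre ration), nothing more:

```python
# Rough estimate: how many people could the world's arable land feed?
# Inputs are the assumptions from the paragraph above, not an agronomic model.
ACRES_PER_KM2 = 247.105

arable_km2       = 13_958_000                     # arable land, in square kilometers
arable_acres     = arable_km2 * ACRES_PER_KM2     # ~3.45 billion acres
acres_per_person = 0.25

max_people     = arable_acres / acres_per_person  # ~13.8 billion
current_people = 7.4e9

print(f"People that could be fed: {max_people:,.0f}")
print(f"Relative to today:        {max_people / current_people:.2f}x")
```

Run it and you get roughly 13.8 billion people, or about 1.86 times the current population.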

Looking at the clever graphs provided by the United Nations (https://esa.un.org/unpd/wpp/Graphs/Probabilistic/POP/TOT/), we are probably good beyond 2100 iff all goes well.  The 90% projection (~12.4B people) suggests we have until about 2080 (extrapolating linearly from the observed line) before we hit our die-off.  Stuart McMillen did a fine job of illustrating the situation (http://www.stuartmcmillen.com/comics_en/st-matthew-island/).

Time for another cuppa.


Tea Time – 11/29/2015 – The Elastic Internet

It is interesting that Google alone boasts more than 2.4 million servers (http://www.circleid.com/posts/20101021_googles_spending_spree_24_million_servers_and_counting/) and Microsoft had crossed the 1 million server mark by 2013 (http://www.extremetech.com/extreme/161772-microsoft-now-has-one-million-servers-less-than-google-but-more-than-amazon-says-ballmer).  Assuming that each server draws about 750 watts (averaging compute and storage nodes), that means at least 3.4 million servers times 750 watts, or 2,550,000 kilowatts of power consumption.  On top of this, there is a requirement for cooling.  Suppose that the cooling is done via heat pumps (air conditioners).  With an energy efficiency ratio (EER) of 8 (https://www.e-education.psu.edu/egee102/node/2106) and a conversion of 3.41 BTU/hr per watt, an additional 1,086,937 kilowatts must be included to remove the heat of the servers.  So, 3,636,937 kilowatts total to keep the servers online.  Coal energy is measured in kilowatt-hours per ton (https://www.eia.gov/tools/faqs/faq.cfm?id=667&t=2), so we convert to kWh by multiplying by the number of hours per year -> 365.25 * 24 = 8,766 hours/year.

From the above calculations, the total energy needed per year for Google and Microsoft is 31,881,394,125 kWh.  Using a conversion of 1,904 kWh per ton of coal, it thus takes 16,744,430 tons, or roughly 15.2 billion kilograms, of coal per year to keep them running.  The density of coal depends on the type; however, we can use the average of 641 and 929 kg per cubic meter (http://www.ask.com/science/bulk-density-coal-e55167b75b4deafc), or 785 kg per cubic meter, to figure out the size of the coal lump we will need.  Dividing the number of kilograms of coal required by the density gives us 19,391,349 cubic meters.  Taking the cube root of that gives us a cubic lump about 269 meters per side.  For those using American units, we multiply by 3.28 feet per meter to arrive at about 881 feet per side, or a little less than two and a half American football fields per side.
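If you want to check the arithmetic, here is the whole chain as a short Python sketch.  Every input is one of the assumptions stated above (server count, watts per server, EER, coal heat content, and so on), so treat the output as the same back-of-the-envelope number, not a measurement:

```python
# Server farms -> kWh/year -> coal -> one big cubic lump.
servers          = 3_400_000        # Google + Microsoft, rough count
watts_per_server = 750              # average draw per compute/storage node
EER              = 8                # air-conditioner efficiency, BTU/hr per watt
BTU_PER_WATT     = 3.41             # heat produced per watt, in BTU/hr
HOURS_PER_YEAR   = 365.25 * 24      # 8,766
KWH_PER_TON_COAL = 1_904            # conversion used in the text
KG_PER_TON       = 907.2            # short ton
COAL_DENSITY     = (641 + 929) / 2  # kg per cubic meter, mid-range

it_load_kw = servers * watts_per_server / 1_000       # ~2,550,000 kW
cooling_kw = it_load_kw * BTU_PER_WATT / EER          # ~1,087,000 kW
total_kw   = it_load_kw + cooling_kw

kwh_per_year = total_kw * HOURS_PER_YEAR              # ~31.9 billion kWh
coal_kg      = kwh_per_year / KWH_PER_TON_COAL * KG_PER_TON
lump_side_m  = (coal_kg / COAL_DENSITY) ** (1 / 3)    # ~269 m per side

print(f"{kwh_per_year:,.0f} kWh/year")
print(f"{coal_kg:,.0f} kg of coal, a cube ~{lump_side_m:.0f} m on a side")
```

The numbers land within rounding of the figures above.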

This is where elasticity comes into play.  An elastic server application uses software-driven automation to determine whether a server is heavily or lightly loaded and can take steps to remove power from lightly loaded servers.  Thus, depending on how loaded the servers are at a given time of day, the number of servers drawing power may be decreased, and with it the size of that lump of coal.  A really good application for such automation is streaming video.  The majority of the population sleeps at night and works during the day.  Thus, servers that provide video to populations can be assumed to have their heaviest load during the evening hours, when people are off work but not yet asleep.  The rest of the day would be a light load on those servers.  Assuming 8 hours of heavy load out of a day, this means a savings of up to 2/3, or roughly 10.1 billion kg of coal per year.  Now wouldn’t that be fabulous?
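The savings claim is just a duty-cycle calculation, something like this sketch (the 8-hour heavy-load window is the assumption from the paragraph above):

```python
# If the video servers only need full power 8 hours a day and can be powered
# down the rest of the time, up to 2/3 of their energy (and coal) goes away.
coal_kg_per_year = 15.2e9                # from the estimate above
heavy_hours      = 8
fraction_off     = 1 - heavy_hours / 24  # 2/3

print(f"Potential savings: ~{coal_kg_per_year * fraction_off:,.0f} kg of coal/year")
```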

Tea Time – 11/7/2015

It must have been 25 years ago that it dawned on me that, in the future, we would have fabulous products that didn’t work all the time.  You see, I was working at a consumer electronics company churning out circuits and code for gadgets at a feverish pace.  The rule was that you had to get the product to market fast and at a very low price.  I reflect on that now as I attempt to use my tablet computer and the application crashes.  You see, reliability is inversely proportional to complexity.


Think of the Space Shuttle.  Per https://www.nasa.gov/pdf/566250main_2011.07.05%20SHUTTLE%20ERA%20FACTS.pdf there were about two and a half million parts in the vehicle.  The parts in a space vehicle are classified as class ‘S’ parts (http://engineer.jpl.nasa.gov/practices/1203.pdf).  These are the highest-reliability parts available.  Now, suppose they had a reliability of 99.999%.  That means there is a failure rate of 0.001%.  Thus, on average about 25 parts could be expected to be failing at any given time.
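The arithmetic is just the expected value of a binomial: failures = parts × failure rate.  A trivial sketch:

```python
# Expected number of failing parts with n parts at failure probability p.
n = 2_500_000      # parts in the Shuttle
p = 0.00001        # 0.001% expressed as a fraction (99.999% reliability)

expected_failures = n * p
print(f"Expected failing parts: {expected_failures:.0f}")   # ~25
```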

There are ways to increase reliability.  The usual first attempt is to install a redundant system.  However, this is often not done correctly.  Suppose you have two power supplies in your home computer.  If one fails, the other will take over for it.  But they are plugged into the same outlet.  A common-mode failure of the power at that outlet will still cause the system to fail.
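You can see the effect with a little probability.  The numbers below are made-up illustration values, not measurements of any real power supply, but they show how one shared outlet dominates the math:

```python
# Two redundant power supplies plugged into the same outlet.
p_supply = 0.01      # chance a single supply fails in some interval (assumed)
p_outlet = 0.005     # chance the shared outlet/circuit fails (assumed)

both_supplies_fail = p_supply ** 2                              # 0.0001
system_fails       = 1 - (1 - both_supplies_fail) * (1 - p_outlet)

print(f"Independent supplies only: {both_supplies_fail:.6f}")
print(f"With the shared outlet:    {system_fails:.6f}")         # ~0.0051
```

The redundant supplies buy you a factor of 100; the shared outlet promptly gives most of it back.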

Then there is cross-coupling.  Suppose you have two computers monitoring a device.  They also send a message to each other to make sure the other is alive.  They are both on different power systems, and both have different mechanisms to observe the device they are monitoring.  If anything should fail, they are to send out a message to headquarters telling of the need to attend to the device.  Now suppose that the first computer has an electrical fault and it burns out its little electronic brain.  Along with its conflagration, the electrical fault travels down the communications connection between the two monitors and deals a similar fatal blow to the second computer.  Nothing is sent back to mission control, and the critical part being monitored dies a silent death.

Perhaps the be-all and end-all of reliability analysis is the Failure Mode, Effects, and Criticality Analysis (FMECA) (http://rsdo.gsfc.nasa.gov/documents/Rapid-III-Documents/MAR-Reference/GSFC-FAP-322-208-FMEA-Draft.pdf).  This analysis is performed on both spacecraft and medical life-support devices.  The issue with an FMECA is that it costs a great deal if the system is not documented.  Therefore, if you are designing a system from scratch, it is best to perform an FMECA just after the first high-level design review and iterate the findings back into the requirements.

Here is another fun thing to do.  Take a drawing of your system and then X out one of the components.  Was any type of alert sent to notify someone of the failure?  If not, then move up the chain and fail another component until something is noticed.  Does your system fail catastrophically?  What happens if you do notice the failure but the Mean Time To Repair (MTTR) is longer than the Mean Time Between Failures (MTBF) of the components?  This means that, even if you have a backup system, the whole system may still go down if you do not repair the failed unit before the backup itself has a high probability of failing.
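Here is a rough way to see the MTTR-vs-MTBF problem, assuming a simple exponential failure model and some purely illustrative numbers:

```python
import math

# If it takes this long to repair a failed unit, what are the odds the
# surviving backup also fails before the repair is done?
mtbf_hours = 5_000    # mean time between failures of one unit (illustrative)
mttr_hours = 2_000    # mean time to repair the failed unit (illustrative)

p_lose_backup = 1 - math.exp(-mttr_hours / mtbf_hours)
print(f"Chance of losing the backup mid-repair: {p_lose_backup:.0%}")   # ~33%
```

When MTTR creeps up toward MTBF, redundancy stops saving you.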

Ok.  So what is all of this stuff worth?  If you are not in the technology industry, think of an expensive item you buy that is very complex.  Let’s say your laptop.  Yes.  You have just purchased a new laptop.  You should just start using it, right?  Wrong.  These things have a limited warranty and are cranked out by the thousands per year.  The warranty is usually for one year.

Most failures occur within the first 100 hours of the life of a part.  Thus, if you have been shipped a lemon laptop, it would behoove you to turn the thing on, disable its power-saving sleep mode, and keep it running (or buy a Wiebetech Mouse Jiggler, http://www.amazon.com/CRU-Inc-30200-0100-0011-WiebeTech-Jiggler/dp/B000O3S0PK).  It is also good if you can have it do something while it is powered up for those 100 hours, but just letting it run and ensuring its disk drive is spinning is good enough.  If it makes it through the first 100 hours, then it will most likely survive until it wears out.

Another cup?


Tea Time – 9-12-2015 – Big Herb

I have not yet figured out why there is a human propensity to distrust peer-reviewed studies and basic science. Sugar is a case in point. High fructose corn syrup is vilified, but agave syrup is good for you? A quick use of a search engine and a restriction to .gov and .edu provides information describing the chemistry of both (https://www.nlm.nih.gov/medlineplus/ency/article/002444.htm). Agave syrup (nectar?) turns out to be mostly fructose. Corn syrup turns out to be glucose syrup.

Dig further and one finds that high fructose corn syrup is what you get when you use an enzyme (xylose isomerase) to convert the glucose in regular corn syrup into fructose (an enzyme is a protein that serves a particular biochemical function, such as breaking or combining molecules). Thus, the main sweetener in agave syrup is chemically the same as the one in high-fructose corn syrup. But is it bad?

Too much of anything is bad. Sugars (anything that ends in ‘-ose’) are empty calories and thus can be problematic. If you consume 3,500 extra calories (technically kilocalories) above your normal daily requirements, you have the makings of an additional pound of energy stored as fat.
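The arithmetic is simple enough to put in a few lines (the daily surplus below is just an example figure, not a recommendation):

```python
# A small daily calorie surplus, compounded over a year.
daily_surplus_kcal = 150        # e.g., one sweetened drink a day (assumed)
kcal_per_pound_fat = 3_500

pounds_per_year = daily_surplus_kcal * 365 / kcal_per_pound_fat
print(f"~{pounds_per_year:.1f} pounds of fat per year")     # ~15.6
```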

So, why do we crave sugars? Why do we crave fats? The most likely reason, from an evolutionary point of view, is that we are the descendants of hunter-gatherers who were most likely to survive if they could sense and consume scarce, high-calorie food sources.

So, back to agave syrup. Why is it marketed as natural and better for you? Well, because it is a great way to extract money from a consumer. In the United States, the Food and Drug Administration (FDA) does not approve dietary supplements before they reach the market, nor does it provide guidance on what counts as “natural” (http://www.fda.gov/Food/DietarySupplements/default.htm). Thus, with the agency under-funded, few of these products are tested unless someone complains.

So, practically no one controls dietary supplements.  To me, that means be afraid.  Many people fear “Big Pharma,” the companies that make approved pharmaceuticals.  Few people seem to fear “Big Herb,” the companies that don’t back research on their products lest they discover they are worthless or, worse, dangerous (http://www.sciencemag.org/content/349/6250/780.summary?sid=e0a31d95-c418-4ef7-a82b-75cd460101f2; you may need to go to the library to read it, as it is behind a subscription).

While “Big Pharma” must go through years of clinical trials to prove the safety and efficacy of its products, you or I could harvest our weekend crop of Poa pratensis, dry it, put it in gel caps, and sell it as a supplement “as effective as echinacea,” since there does not seem to be any body of research indicating that echinacea is anything other than a placebo, nor Poa pratensis anything other than fiber.  Oh, in case you didn’t look it up, Poa pratensis is the botanical name for Kentucky bluegrass, which is what grows as a lawn in many American yards.

So, one lump or two?

Tea time – 9-8-2015

What a strange fascination we have with making robots that replace humans.  Bina48 is a talking head.  The Japanese are really into it (just go to youtube.com and search for ‘human-like robots’ and you will get an eye- and ear-full).

Why?  Are there not enough people in the world?  I’m pretty sure they consume more coal-powered electricity and are less efficient than the 100 W bio-powered systems they imitate.  Perhaps “Actroids” are cheaper than humans, but you still have to pay the voice-over folks, right?

How about AI?  There seems to be a great resurgence in fear of AI.  If you have ever had to work on over a million lines of code with little documentation and an expansive team, you know that no one person knows how each section actually works.  Thus, you end up with people who specialize in portions of a code base.  Or, how about something as complex as a space station?  No human has the whole thing in their head.  However, an AI with a much larger and more readily available memory could take the whole thing in and understand every nuance of every system.

Here is another example.  Biology is VERY complex.  Not only do you have to deal with the A, T, C, and G patterns that encode proteins, but you also have to deal with the methylation of start regions that enables or disables portions of the genome via environmental influence.  Genetics + epigenetics = exponential complexity.  Ok.  So none of us can fathom the entire thing at once.  But a very cool AI could.  It could fathom the entire thing at once and assist with the development and evaluation of the exabytes of genetic data, and it wouldn’t even have to sleep.  It would probably eat a lot of coal, though.  Can’t be helped.

So, why do some people fear AI and why do some people insist on making silly human-like robots?

Software development processes

There is a very nice article on Quora at: http://www.quora.com/Agile-Software-Development-1/Why-do-some-developers-at-strong-companies-like-Google-consider-Agile-development-to-be-nonsense

Steve McConnell is my favorite author on software development processes.  His book, Rapid Development (http://www.amazon.com/Rapid-Development-Taming-Software-Schedules/dp/1556159005/ref=sr_1_1?ie=UTF8&qid=1441682139&sr=8-1&keywords=Rapid+Development&pebp=1441682140131&perid=0DV2A3BNGGAG9W0SRYT7), goes through the long history of development practices prior to Agile.  It is my true hope that Steve updates the book to include his views on what Agile means.

Per the article, Agile methods work very well when one has a moving target and needs to change rapidly.  However, the author explains that Agile and Stable are two different worlds.  One can see where being agile with web page content is a good thing.  Perhaps not so much with the software in a cardiac pacemaker?  Maybe not the software controlling your car’s anti-lock brakes or accelerator, either.

All that being said, I cannot wait for someone to come up with a term that replaces “Agile,” since it is, to me, just another marketing buzzword like so many I have seen in the past.  After all, one cannot be expected to develop good software without having a large number of high-dollar consultants about to tell you what you should do.  The binary Agile-vs-Waterfall framing is one of the most irritating.  Were there no processes in the past other than Waterfall?  How about Agile vs. the Spiral Development Model?  How about Agile vs. the Rational Unified Process?

Jira is another tool to be abused.  It seems to be a ticketing system that is slowly trying to evolve into something as useful as Microsoft Project.  Very much like the job-shop model from operations analysis, Jira builds a backlog of software items to be manufactured, and they are placed into various workers’ production queues.  The developer works through the items in their queue.  It is a pull model: the worker pulls the next piece of work from their queue when they are ready to work on it.  Partial delivery times are broken down into “sprints.”  If something doesn’t get done in a sprint, it is just “technical debt.”  People love buzzwords, don’t they?  Perhaps someone would be so kind as to fix the Wikipedia article on Agile Software Development?  (https://en.wikipedia.org/wiki/Agile_software_development)

Thus, my opinion is that the word “Agile” associated with “Software Development” should be abolished and a new term used with concrete process oriented definitions.  I mean, isn’t it supposed to be a development process?  We could call our new method the “Nimble Software Development” methodology.   Of course, we would have to market it heavily and make it as ambiguous as possible so that everyone thinks that whatever they are doing it is wrong.  Oh.  Wait….  Hmmmm.

Tea time observations – 9-6-2015

It is interesting that gasoline is not a single chemical. Rather, it is a mixture of hydrocarbons with chain lengths between 4 and 12 carbon atoms. UNLEADED GASOLINE (UNBRANDED) MSDS No. APPC975 Ver. 1 (print date 05/19/2003) lists some of the contents as:
BENZENE, CYCLOHEXANE, ETHYLBENZENE, METHYL TERT-BUTYL ETHER, TOLUENE and XYLENE.

Most people do not realize the fantastic energy density contained in a liter of gasoline (http://www.eia.gov/todayinenergy/detail.cfm?id=9991). The EIA states that even gasoline diluted with 10% ethanol still provides about 120,000 BTU per gallon (31,700 BTU per liter, or 9.3 kWh per liter). Compare that to our highest-density, publicly available lithium-ion batteries, which boast a mere 0.25 to 0.675 kWh per liter.

Now suppose that the conversion from gasoline to usable energy is only 25% efficient.  That still leaves about 2.3 kWh per liter of gasoline.

Ok.  So you buy an electric car.  Now you are burning coal (9.8 kWh/kg) with a conversion efficiency of roughly 33% (EIA again: http://www.eia.gov/tools/faqs/faq.cfm?id=107&t=3), and then assume a 40% power transmission loss to heat generated in the power lines.  On top of that, you can only get an 80 to 90% charging efficiency for lithium batteries, and that doesn’t include the loss due to AC power rectification at the point of delivery.
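Putting the whole chain together, using only the assumptions above (the 25% engine efficiency, the roughly 33% plant efficiency, the assumed 40% line loss, and an 85% charging efficiency picked from the 80 to 90% range), a sketch looks like this:

```python
# Gasoline side: energy that reaches the wheels per liter of fuel.
gasoline_kwh_per_liter = 9.3
engine_efficiency      = 0.25
useful_kwh_per_liter   = gasoline_kwh_per_liter * engine_efficiency   # ~2.3

# Coal-to-battery side: energy that reaches the car battery per kg of coal.
coal_kwh_per_kg   = 9.8
plant_efficiency  = 0.33      # coal plant (assumed)
line_delivery     = 0.60      # 40% transmission loss assumed above
charge_efficiency = 0.85      # middle of the 80-90% range (assumed)

kwh_per_kg_coal = coal_kwh_per_kg * plant_efficiency * line_delivery * charge_efficiency

print(f"Gasoline, at the wheel:     ~{useful_kwh_per_liter:.2f} kWh per liter")
print(f"Coal, into the car battery: ~{kwh_per_kg_coal:.2f} kWh per kg")
```

On these assumptions, a liter of gasoline and a kilogram of coal end up delivering energy of the same order of magnitude.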

Interesting isn’t it?