Observations on changing electricity and gas supplier.

I did a little bit of spreadsheet work to decide whether I should change electricity provider. I have made a few observations which may be of interest.

My 2017 usage was 4,868 kWh of electricity and 21,563 kWh of gas. My first observation is that something has happened whereby gas, like electricity, is now bought by the kWh rather than the cubic foot, which I think was the unit the last time I thought about this kind of thing.

The charges are a 19.9983p per day standing charge for each fuel, 17.4195p/kWh for electricity and 19.9983p/kWh for gas. My second observation is that these are amazingly similar, given all the losses involved in making electricity, much of it from gas. My current tariff is Co-op Energy Green Pioneer.

Choices according to uSwitch: Co-op Energy “Fix and Fly” or Bulb. My third observation is that uSwitch were really motivated to organise the switch for me. I guess they are on commission (which potentially undermines the impartiality of the advice they give), but I had to say “I do not want you to call me again” twice to make them stop. This was a surprise.

                      Elec SC    Elec rate   Gas SC     Gas rate
                      (p/day)    (p/kWh)     (p/day)    (p/kWh)
Co-op Green Pioneer    20.00      17.42       20.00      4.39
Co-op Fix and Fly      23.29      14.96       23.29      3.29
Bulb                   24.56      13.79       24.56      2.67

My fourth observation is that the rates ARE less on the proposed supplier/tariff, but the daily standing charges are more. If you were interested in bamboozling the consumer, this is exactly what you would do, isn’t it? Clearly, in this set-up, some users WOULD pay less under the new regime, and some would definitely pay more. In which group would I be?

This problem can be solved algebraically but I chose to do a spreadsheet and two graphs:

[Electricity graph: Y axis is £ per month, X axis is annual kWh usage; the arrow marks my 2017 usage.]

[Gas graph: Y axis is £ per month, X axis is annual kWh usage; the arrow marks my 2017 usage.]

So switching to the other Co-op tariff would take my annual costs from £1,908 to £1,579, and switching to Bulb would take them to £1,401: savings of £328 and £506 respectively. This is a lot. To lose out by switching I would need to be a very light user: less than 1,000 kWh of electricity a year (not my 4,868) and less than 3,000 kWh of gas (not my 21,563) would result in paying more if I switched.
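For anyone who would rather not build the spreadsheet, here is a minimal sketch of the same sums in Python. It uses the rounded tariff figures from the table above (my spreadsheet used the unrounded rates, and the break-even points above were read off the graphs), so do not expect its output to match the post to the pound.

    # Annual cost of one fuel: the standing charge accrues per day, the unit rate per kWh.
    # Tariff figures are in pence, taken (rounded) from the table above.
    def annual_cost(kwh, standing_charge_p_day, rate_p_kwh):
        """Annual cost in pounds for a given annual usage in kWh."""
        return (365 * standing_charge_p_day + kwh * rate_p_kwh) / 100

    # My 2017 usage, costed on Green Pioneer and on Bulb (electricity plus gas).
    tariffs = {
        "Co-op Green Pioneer": dict(sc=20.00, elec=17.42, gas=4.39),
        "Bulb": dict(sc=24.56, elec=13.79, gas=2.67),
    }
    for name, t in tariffs.items():
        total = annual_cost(4868, t["sc"], t["elec"]) + annual_cost(21563, t["sc"], t["gas"])
        print(f"{name}: £{total:,.2f} per year")

    # Per fuel, the break-even usage is where the extra standing charge exactly eats
    # the saving on the unit rate: 365 * (sc_new - sc_old) = kwh * (rate_old - rate_new).
    def break_even_kwh(sc_old, rate_old, sc_new, rate_new):
        return 365 * (sc_new - sc_old) / (rate_old - rate_new)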

Well, switching away from Co-op is a step away from decapitalising my life (that is, buying things and services from non-corporate models of organisations when I can), but Bulb is all renewable and is a B Corp, and pennies are a little short this year for various reasons, so I have switched to Bulb. My fifth observation is how genuinely quick and easy it was to start the switch. The process is simple and laid out for me and will take a few weeks.

Oh yes: Bulb gives out bribelinks to new sign-uppers, so if you sign up to Bulb using my link, then I get £50 and you get £50. I am not sure how I feel about this. In fact, I am sure how I feel about this. I think it is bad and makes rational purchasing decisions impossible, and it makes it impossible for you to read this blogpost without wondering if I am doing it for the wrong reason. I am against this kind of thing. I am also a bit pissed off to find out, too late, that the internet is completely covered with people’s own Bulb bribelinks, each asking you to use theirs to switch.

Resentfully, here is my bribelink: bulb.co.uk/refer/william26

Turning a 2D spreadsheet array into a single long list.

I’ve just worked out this poisonous little trick and did not find anything about it on the internet, so here it is:

Problem:

Hospitals in a survey have between zero and 47 ‘areas of work’ which psychologists undertake. I have 199. That, multiplied by the 47, is too many to do by hand. I have a list for each hospital; I want a single list which I can then sort, remove duplicates from, and so on.

Solution:

  1. At the bottom of the first column, add a cell which references the cell at the top of the second column. Use relative addressing.
  2. Fill-Right this new cell all the way across to the second-last column.
  3. Fill-Down this new row for r × c extra rows (in my case 38 hospitals by 47 answers, which is a lot).
  4. The first column, stretching down and down and down, contains every cell in the array in a single long column.
  5. Copy and paste ‘values’. You can now delete the remaining 46 columns.
  6. As a check, you can count the non-missings of the array, then count the non-missings in the now-extended 1st column. If they are different, then you have a problem somewhere.
  7. Then sort, remove duplicates, etc. to your heart’s content (a code-based cross-check is sketched below).
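For the record, the same flattening can be done outside the spreadsheet. The sketch below is a rough Python equivalent (not the fill-down trick itself); the example data and the blank-cell convention are made up for illustration.

    # Flatten a 2D array (one row per hospital, one column per answer) into a
    # single list, skipping blanks: the same end result as the fill-down trick.
    rows = [
        ["audit", "teaching", "", "research"],
        ["teaching", "triage", "audit", ""],
        # ...one list per hospital, up to 47 answers each
    ]

    flat = [cell for row in rows for cell in row if cell != ""]

    # The check from step 6: the flat list should hold every non-missing cell exactly once.
    assert len(flat) == sum(1 for row in rows for cell in row if cell != "")

    # Then sort and de-duplicate to your heart's content.
    print(sorted(set(flat)))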


PROMs and private hospitals.

This morning I tweeted a rash tweet in response to an article on the BBC website which compared NHS patients having certain operations in NHS hospitals with those who have them in private hospitals, still paid for by the NHS.

Here is the tweet which drew my attention to the article:

Here is my rash tweet:

And here is a reply:

Now, that tweeter has been on at me for eleven years to start a blog of my own. So, today is the day.

I am an epidemiologist and spend some of my professional life persuading other researchers to tone down their scientific claims. This is a problem for all kinds of reasons, including that academics are inappropriately incentivised to overhype their findings. Imagine the pathology department in your local hospital gave prestige, job security, advancement, praise and more money to the individual pathologists who found the most cancer in the biopsy samples they are sent. Not to the ones who most accurately identify cancer: the ones who detect the most, and the rarer the better. This is how academics are incentivised, and it is a disaster. But it is not today’s disaster.

The tweet that caught my eye this morning was about an article by Chris Cook, the policy editor for Newsnight. The headline claimed NHS patients who went to private hospitals for their NHS operations, paid for by the NHS, had “better results”. The reason I tweeted a sigh about it was that there was far more to say about it than 140 characters would allow.

Before I go on, I should say that data should not be only available to professionals like me. I am in favour of data journalism and I am not lobbing any rocks at Chris Cook or the Newsnight team. This could be a story of misincentivised journalists and for that matter media organisations, but that is not today’s disaster either. I should also say I am not utterly utterly wedded to a public NHS either and am interested in a finding which might throw some light on the issue. Private involvement in healthcare provision is widely thought to be a disaster and I have seen some of that myself, but the ‘proper’ NHS is hardly insulated from being a disaster at times. These two disasters are also not today’s.

There is a simple way to think about studies concerning causality. First think about the exposure, then the outcome, then the process of comparison, then what it means including reasons for caution. The well-known PICO mnemonic is similar: Population, Intervention, Comparison, Outcome, which is really about defining a research question for an experimental study but could be used here.

The exposure was having your operation in a private hospital rather than in a good old NHS hospital. In this case, the operations were hip replacements and knee replacements. Easy! For completeness, we should also talk about the population, which is people who had either of those two operations.

The outcome is the score on questionnaires called PROMs, which stands for Patient Reported Outcome Measures. These are questionnaires people complete before and after their operations in an effort to capture the improvement gained. Their use is now mandated in the NHS for the above two operations, plus varicose vein surgery and groin hernia repair.

So at its heart this analysis looked at the people who had their operations in the NHS and the people who had their operations in the private sector and found the people who had their operations in the private sector scored better. Bingo! More private involvement in the NHS please!

Not so fast! Every young epidemiologist is taught to think about four explanations of any association before considering causality. They are chance, bias, confounding and reverse causality. I am going to focus on confounding.

Confounding is a non-causal alternative explanation for the detected association. My favourite teaching example is the startling relationship between having grey hair and dying in the next ten years. Does this finding mean we should infer causality, and demand Grecian 2000 be available to all on the NHS? Or could it be that grey hair is associated with being older, and being older, sadly, is associated with a greater risk of dying? Suddenly all that hair dye is looking like one of those ‘NHS waste’ scandals the newspapers love.

Could the relationship in the article be confounded? Could there be another explanation? In this case there could be dozens or hundreds. Private hospitals tend not to have intensive care units and are notoriously sparsely staffed, so the patients who get in are likely to be less ill than the ones who do not, because the decision over where each patient goes is taken by a clinician who will be managing clinical risks.

Also think on this: poor health and poor health outcomes tend to be concentrated in areas of high deprivation. Now, do you think areas of high deprivation are more likely or less likely to have a private hospital nearby which could take on operations from the NHS? If you are thinking that poor areas are more likely to have worse health outcomes, but less likely to have private hospitals, then that’s good. You have probably worked out already that this could produce the reported results, even though it tells you nothing about whether private provision is better. Welcome to my world. Welcome to my job of constantly telling people they have not found what they think/hope they have found. Yes, I often sit alone at lunchtime.

The article reported exactly this, that the people in the NHS hospitals were already sicker than the ones in the private hospitals, but claimed this had been taken into account by the use of a regression model. I’ll have to blog about this in the future, but to cut a long story short: when the variables capture everything about a thing (male/female is pretty good; ‘looks a bit peaky’ is not), when the relationships between those things are quite well understood, and when the mathematical distributions of the various factors are known, then it works quite well. Sadly, a complex clinical choice to place someone in one hospital rather than another will not be captured by some simple answers on a questionnaire, so there is likely to be so-called ‘residual confounding’ when regression adjustments are made in this kind of setting, and the remaining relationship is still confounded. It is this Giant Jenga of assumptions that made me sigh in my tweet this morning.
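To make the residual-confounding point concrete, here is a toy simulation in Python. Everything in it is invented and has nothing to do with the actual PROMs data: clinical severity drives both where a patient is sent and how they score, the true effect of the private hospital is set to exactly zero, and adjusting for a noisy questionnaire-style proxy of severity removes only part of the spurious ‘private is better’ difference.

    # Toy simulation of confounding by clinical severity (illustration only; all
    # numbers are invented and have nothing to do with the real PROMs data).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Severity is the confounder: sicker patients score worse AND are steered away
    # from private hospitals by the clinician making the referral decision.
    severity = rng.normal(size=n)
    p_private = 1 / (1 + np.exp(2 * severity))   # less severe -> more likely private
    private = rng.random(n) < p_private

    # The true effect of the private hospital on the PROM score is exactly zero here.
    prom_score = 50 - 5 * severity + rng.normal(scale=5, size=n)

    naive = prom_score[private].mean() - prom_score[~private].mean()
    print(f"Naive difference (private minus NHS): {naive:.2f}")  # comfortably above zero

    # Adjusting with a regression that only sees a noisy proxy of severity (what a
    # short questionnaire actually captures) leaves a residual 'private is better' effect.
    proxy = severity + rng.normal(scale=1.0, size=n)
    X = np.column_stack([np.ones(n), private.astype(float), proxy])
    coef, *_ = np.linalg.lstsq(X, prom_score, rcond=None)
    print(f"Adjusted 'effect' of the private hospital: {coef[1]:.2f}")  # still above zero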

The problems above are only the small problems. Really they are only some of the small problems. In real science (when it is functioning properly), before your work is out there in front of the world, being believed and acted upon by politicians, clinicians and individuals, it undergoes ‘peer review’, where all these details are stress-tested by a critical friend, often protected by anonymity. Peer review has its own problems, but that is not, perhaps predictably, today’s disaster either. Authors have to show their working: describe what they did and why. The article lacks the basics by which the underlying study might be judged. What was the sample size? What assumptions were used in the regression model? How many missing data points were there? How did they handle the missing data? What assumptions did the statistical tests make? Without these properly reported, and with no real rationale for why the study was done, it’s impossible to give it a fair assessment.

We know not what, we know not wherefore and we know not whom. I counsel caution before acting, friends.