Saturday, March 29, 2014

Fuming over Carbon Emissions Reporting



You know what really grinds my gears? When people give the US credit for decreasing its emissions via increased use of natural gas while simultaneously scolding Europeans for burning more coal. It’s almost as if they forgot the first part of their argument by the time they got to the second. Just repeat after me: coal is traded globally. With natural gas prices in the US falling to unheard-of levels, coal becomes less competitive domestically, creating arbitrage opportunities for domestic coal suppliers and foreign end-users. Or, to put it more succinctly, that cheaper coal has to find a plant somewhere to burn in.

It should be noted that the reason coal is being exported while natural gas is not is that LNG export terminals have to be built first, a very capital-intensive and regulatorily daunting undertaking. The regional nature of gas markets means gas in the US will stay pretty close to home. The disparity can be seen in the fact that the UK price for natural gas, the National Balancing Point, trades at two to three times the US price, the Henry Hub spot price. In fact, the regional nature of gas markets is one of the biggest reasons why, for all the fuming by Europeans over recent Russian escapades in Crimea, they will not be able to find an alternative gas supply anytime soon.

So yes, the US is reducing its own emissions, but those emissions are simultaneously being exported to Europe. In fact, this could theoretically be welfare-enhancing, considering that emissions carry a price in Europe, albeit a very low one. It’s this sort of refined analysis I keep looking for. Unfortunately, I just keep getting stuck with “blame the Germans” and “God bless the USA.”


Wednesday, January 8, 2014

Netflix and Its People Problem

Felix Salmon recently wrote an excellent piece detailing some current problems with the Netflix model. Studios have all the bargaining power: any time they see big profit numbers generated by streaming providers, they can simply demand higher prices for their content. This is why you see Netflix and HBO rushing to create their own content, trying to escape this intensifying bidding war.

As a result of these bidding sprees, Netflix has begun to lose out on content quality. To rectify this, Netflix’s recommendation algorithms have had to get more sophisticated, trying to determine preference patterns across a landscape devoid of quality. Without high-quality content, Netflix now has to grope around a dark room of content, using touch and feel in lieu of more accurate vision, leading to bumps, bruises, and constant recommendations of Iron Man 2.

This approach runs into two seemingly related problems. First, as my previous post alluded to, individuals do not have innate preferences for many goods and experiences. Let’s say someone described everything there was to know about ice cream, from the sweet sensation of the cream as it melts on your tongue to the molecular structure of cream, sugar, and ice particles. Would you then be able to predict whether you would like it or dislike it? Well, I like cold things, like snow, but ice cream isn’t really snow. I like milk a lot, but what about all that sugar and those flavorings? Would your enjoyment of the individual parts of ice cream guarantee that you were going to love ice cream? This is what Netflix’s new recommendation system is betting on: that it can tease apart differential aspects of movies and triangulate stable preferences. Unfortunately, very subtle things can change the experience of an event.

A now-famous study by Dan Ariely highlights the malleability of experience. He opened a class with a brief reading of a Walt Whitman poem and told students he would be doing a few short poetry readings one evening. The class was then split into two groups. The first group was told the cost of the show would be $10 and asked whether they would accept that and what they would be willing to pay to see the show. The second group was told that they would be paid $10 to see the show, and then asked the same set of questions. The first group said they would be willing to pay $1 to $5, versus the second group, who said they would be willing to go if paid $1 to $4. Note that the group that had been asked if they would pay $10 could have asked to be paid to attend, but they did not. Preferences for goods, especially experiential goods, are highly context dependent.

I’m not denying that certain dispositional tastes exist. I habitually watch sci-fi movies. I have seen more outer-space prison-break movies than movies made before 1960. But that is a whole different ballgame from presuming my taste is something as specific as "Foreign Satanic Stories from the 1980s." I like many romantic comedies from the late 80s and early 90s, but to say this links Say Anything and Pretty Woman seems a bit of a stretch.

This strategy runs into a second problem: overfitting. When sampling from complex, feedback-driven systems, the model used must be very robust to future deviations. Gerd Gigerenzer gave a great talk on the robustness of simple models. The two graphs below illustrate the problem.




The first graph shows two different models fitted to yearly temperature data: one a 12th-degree polynomial, the other a 3rd-degree polynomial. As can be seen, the 12th-degree polynomial has lower error; it fits the temperature data better. However, as the second graph shows, the predictive ability of the 12th-degree polynomial is much worse. The more one attempts to boil complex systems down to single, elaborate algorithms, the more problems one runs into out of sample.
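To make the overfitting point concrete, here is a minimal sketch in Python (my own toy example, not Gigerenzer's actual data): fit a 3rd- and a 12th-degree polynomial to some noisy, made-up seasonal temperature readings, then compare in-sample and out-of-sample error.

```python
# Toy illustration of the overfitting point above; the "temperature" data are simulated.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 60)                                 # one year, rescaled to [0, 1]
true_temp = 10 + 12 * np.sin(2 * np.pi * (x - 0.27))      # smooth seasonal cycle
observed = true_temp + rng.normal(0, 2.5, size=x.size)    # noisy daily readings

train, test = slice(0, 40), slice(40, 60)                 # fit on early data, predict the rest

for degree in (3, 12):
    coeffs = np.polyfit(x[train], observed[train], degree)
    fitted = np.polyval(coeffs, x)
    in_sample = np.mean((fitted[train] - observed[train]) ** 2)
    out_sample = np.mean((fitted[test] - observed[test]) ** 2)
    print(f"degree {degree}: in-sample MSE {in_sample:.2f}, out-of-sample MSE {out_sample:.2f}")

# Typically the 12th-degree fit has the lower in-sample error yet a far larger
# out-of-sample error; the extra flexibility chases noise rather than signal.
```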

Now it seems I might be painting myself into a corner. I have stated that simple models predict complex systems well, while simultaneously acknowledging that simple stimulus-response systems such as Netflix’s are insufficient in the face of this complexity. What I am trying to show is that there is no one algorithm to rule them all. How can simple models be used in the human realm? By utilizing our already existing pattern-recognition devices: other people’s brains. By supplementing these algorithms with expert human judgment, we are able to see increased success. Salmon has written another article on exactly this subject:
Nate Silver himself has written thoughtfully about examples of this in his book, The Signal and the Noise. He cites baseball, which in the post-Moneyball era adopted a “fusion approach” that leans on both statistics and scouting. Silver credits it with delivering the Boston Red Sox’s first World Series title in 86 years. Or consider weather forecasting: The National Weather Service employs meteorologists who, understanding the dynamics of weather systems, can improve forecasts by as much as 25 percent compared with computers alone. A similar synthesis holds in economic forecasting: Adding human judgment to statistical methods makes results roughly 15 percent more accurate. And it’s even true in chess: While the best computers can now easily beat the best humans, they can in turn be beaten by humans aided by computers.

He gives short shrift to where I believe this might be most groundbreaking: the experiential-goods market. There is already a model for supplementing algorithmic results with human judgment, and that is “A BetterQueue,” a very simple website that links Rotten Tomatoes and Netflix. The key here is that a person can pre-select the categories and rating criteria that will pass through their filter. We have not only human reviewers as a filtering scheme, but also the person who will be experiencing the good itself. I think this is why online dating sites are so successful as well. For all the hype about the algorithms used to generate potential matches, these systems have the ultimate backstop: people generally have to talk to one another before any real commitment is made. So while an algorithm gets people to the door, it is a person who is tasked with figuring out whether this is the right one for them. These are the types of systems I believe will be key to these markets developing in the future. Ignoring how human judgment and relationships help to create a good fit for individuals will mean these systems remain sub-optimal recommendation schemes.

Saturday, November 16, 2013

The Pitfalls of Mental Outsourcing

This post builds somewhat off my last one, where I introduced the idea of coherence-generating mechanisms: tools meant to synthesize and understand some underlying pattern in the context of social phenomena. In the realm of music, think of Pandora. Via a feedback system, it seeks to home in directly on one’s preferences. The more I like R&B songs, the more the system understands that I have a preference for them. The brain also fits this definition. It’s one big pattern recognizer, trying its best to turn thought-intensive responses into reflexive impulses. With the rise of “big data” these systems are becoming ever more ubiquitous, whether it be Google Ads or Netflix’s recommendation systems.

I think the best way to define something is to define what it is not. My coherence-generating mechanisms are not simple single-lag systems; that is, they do not assume that the future will exactly mirror the past. The system has to “learn” in some sense. It must create categories of knowledge that it can call upon in the future. Netflix’s recommendation system fits this definition, since it attempts to utilize both viewing habits and ratings to pinpoint some locus of movie preference. Other examples are discussed at length below.

From the standpoint of marketers, the utility of these mechanisms is clear: strip away all the excess of human activity and drill down to some innate dispositions and preferences. While I think it is clear people have preferences, be it liking Indian food relative to Sichuanese cuisine, I think we need to acknowledge the limitations and tradeoffs associated with these approaches. Think about listening to Pandora while you are running. This perfect song comes on with a beat that keeps you moving. (FYI, beats above 145 BPM seem to yield little additional effect.) So you like that song on Pandora. Three hours later, while sitting in your bedroom contemplating why cars park in driveways but drive on parkways, that same song comes on. Now, in your contemplative state, the song comes off as too much; the beat pulses through your brain, disrupting your ruminative frame of mind. This situation could easily be resolved if you could condition preferences on particular states of mind. But Pandora does not know what you’re currently in the mood for; it only knows the feedback you have generated for it. While these systems feel intelligent, they are simply classic stimulus-response mechanisms.

This feeling that something deeper is going on in these systems gives rise to an even more important issue: how does the user feel about the system? What sort of confidence does the user place in it? These systems are built to let individuals outsource the impossible task of introspection. Rotten Tomatoes creates a single, unified metric for the quality of a movie. But under the guise of arithmetic precision hide all the underlying biases of the culture industry. Only the movies that can garner reviews end up on the site. Furthermore, bandwagoning and other group-level effects, such as gender and racial bias, can skirt by under the false sense of security that numbers provide. Further studies on how users come to appreciate choices derived from these metrics are important, along with studies of how users fold these informational channels into their further searching behavior.

That the underlying metrics may be flawed does not detract from the way these systems can open users up to new domains they have yet to experience. Google Maps gives me the confidence to walk to places I have never walked to before. These systems open up the landscape for one to explore. From the perspective of learning, however, do these systems allow the user to internalize the new environment they are exploring? Research into the impact of GPS systems on mental visualization suggests not. GPS technology trades off against the specialized mental tasks that underlie some important parts of cognitive development. Like an unexercised muscle, the less users use their mental mapping, the harder it is to get it back in gear. (Here is a short article detailing ways around this problem.)

These systems do not just trade off against the internal resources we use, but against resources in the external world as well. Social communication used to be the route for recommendations: word of mouth was the primary tool for resolving informational asymmetries in the experiential-goods market. While studies have found that critics’ reviews seem to drive revenue generation for films, the question of what these systems do to social ties lies beyond simple market analysis. Do we feel less of a need to consult one another because we have Rotten Tomatoes at our fingertips? Do we view our friends’ recommendations with more skepticism if they contradict the “popular” metrics we consult online? If your friend constantly recommends CDs that Metacritic deems terrible, does your friendship suffer?

These are just a small sample of the possible negative implications of outsourcing our own internal pattern-seeking, or coherence-generating, mechanisms. The ultimate question, from an economic perspective, is whether the benefits outweigh the costs. While I believe that at the end of the day most do, the lack of attention to some of the negative ramifications of these systems should give you pause when running your own cost-benefit analysis.

Monday, November 11, 2013

Reflektions

I decided to try to squelch some of my more ADD tendencies, so I shut off all outside distraction, turned the lights down, and got to listening to the new Arcade Fire album while perched on my bed. Letting my mind wander, I started to pick up subtle nuances as the album developed: the melody slowly rising and overtaking a struggling Win Butler, who early on seems to be at war with the instrumentals, trying to make his voice carry meaning over the cacophony of sound. I thought to myself that the layout of this album is masterful; the transitions work beautifully, and it builds from start to finish, only to come crashing down as the first side ends. But this post is not about my feelings per se. It’s about the sense of coherence one feels when listening to an album in its entirety (which I will admit I didn’t even do, since I stopped at the first side; my ADD almost always wins out).

Is the feeling of coherence intrinsic to the album itself? Most musicians and those around music would say definitely. The songs are placed in their order for some thematic reason to which the authors have privileged access. It could be stylistic, as in building up tempo, or thematic, as in many of the Killers’ early albums, which take you through a journey of pain and loss, ending on a bitter and exhausted note. What if a band such as Arcade Fire were to generate 40 songs and then dump them on their label, which then had the job of ordering them and building up a theme? Would the coherence one feels from this be less valid than the coherence one feels from an album deliberately constructed by its author? Do creation and construction live in separate realms?

To understand this question, one must look at where coherence comes from. For all the best-laid plans a band has, people are fickle about feeling what you want them to feel. The coherence of theme may be lost to the coherence of melody and tempo. In fact, single songs are sometimes put into coherent narratives by our own personal coherence-generating mechanism, the brain. Many people have had the experience of putting their iPod on shuffle, or listening to the radio, and having the perfect song come on at just the right moment. The song order was fundamentally random, with no underlying structure, yet you the observer felt some deeper meaning in the song’s arrival in your personal arc. Indeed, many people hope to discover new connections in their music repertoire with external coherence-generating mechanisms such as iTunes Genius and Pandora. These systems attempt to synthesize an underlying coherence from the listener’s tastes and map it onto new or already existing songs within the listener’s repertoire.

Now it seems that I have divided creation and construction into two separate realms, in that construction can be reconstruction by the observers, the consumers of music. But this same reconstructing of narratives that listeners do is the same sort of reconstruction bands do as they create albums and songs. The tempo, notes, and beats of stanzas are constantly being morphed, blended, discarded, and folded into the larger themes, melodies, and cadences of songs, which are then recombined, spliced, manipulated, and lengthened into full-fledged albums with their own arcs, narratives, and meta-structures. But is this process linear? Does it build from stanza to song to album without any feedback between these interlocking steps?

Music occupies a particular human world of creative expression. Though it is unique, it falls within the realm of human cognition, and I think the answer to this quandary may lie there. A simple analogy to language production may be illuminating. Whenever you blurt out something to a friend about how grand your day was, what comes first: the meaning of the message or the content used to convey that meaning? Many people instinctively say that meaning comes first, for without the meaning, why would we express that particular phrase? But if the meaning came first, what language was the meaning in? If it was in English, then the meaning was already in words, so where did the words come from to clothe the meaning? Many researchers have come down on the side of a co-relationship between meaning and content: each simultaneously constrains the other.

The vocabulary we have at our disposal affects the meanings we express, and vice versa. There’s a reason people who speak more than one language often talk about thinking, or reasoning, in another language in order to resolve some problem. Linguistic constraints beget reasoning constraints (Dennett, 247). Oliver Sacks gives a wonderful report of the constraining aspects of linguistic mediums:

“Communication by motor behavior became a very important part of the transference…[W]ithout knowing it, I was receiving two sets of communication simultaneously: one in words, a form in which the patient ordinarily communicated with me; the other in gestures [signs], as the patient used to communicate with his father. At other times in the transference, the motor symbols represented a gloss upon the verbal text the patient was communicating. These motor symbols contained additional material which either augmented or more likely contradicted what was being communicated verbally. In a sense, “unconscious material” was making its appearance in consciousness by way of motor rather by way of verbal communication (Sacks, 34).”

Transference is a bit much for me, but it is clear that the mode in which reasoning is articulated and rehearsed affects the thoughts that are produced. Verbal and motor communicative structures are different and can construct thoughts in ways that are sometimes at odds. The abacus and the calculator are two different methods for the same goal, calculation, yet the mechanisms used and the results generated can diverge.

This same sort of feedback between meaning and content can be seen in the relationships between albums, songs, and their listeners. While the artist is developing the album, the album is also being particularized into songs, and once those songs are developed they themselves determine where the album is going. This is why outlining is such an important part of the writing process: for all the jumble in one’s head, an outline helps to particularize and focus those disparate connections into some sequential form. Once the album is created, the meaning of the artist is supplanted by the meaning of the listener. Listeners bring in their own personal narratives, but those are themselves constrained by the content of the album. There is a reason people do not usually feel particularly sad listening to a Katy Perry album: the content to derive that meaning is not there. The construction/creation divide (or more generally the form/content divide) is less a divide and more an ecosystem of production that highlights the multilayered process of reasoning.

-----------------------------------------------------------
References:

Dennett, D. Consciousness Explained. Little, Brown and Company: Boston. 1991.

Sacks, O. Seeing Voices: A Journey into the World of the Deaf. Harper Collins: New York. 1990.

Friday, November 2, 2012

Political Independence

I’m getting rather perturbed by the constant stream of naysayers who claim that voting is unimportant. One argument trotted out on a consistent basis goes something like this: a liberal voting in a very conservative county, in a severely conservative state, has very little chance of swaying a national, state-wide, or county-wide election due to the sheer numbers; one vote out of millions will do nothing to change an election. Many people counter with arguments about one’s civic duty to vote, or about how people in other parts of the world are willing to die for the right to vote. All of these arguments are persuasive on moral and philosophical grounds, but if we are dealing with avid utilitarians, hell-bent on claiming that the likelihood one’s vote matters is low, none of these arguments proves the ends justify the means. What all these counter-arguments miss, however, is that there is a pernicious, downright improper assumption running through these anti-voting screeds: the assumption of independence. When I say independence, I’m talking about probabilistic independence, not anything more patriotic or interesting.

Let’s take a step back and talk about this sort of independence; then it will hopefully become clear why these anti-voting authors’ assumption does not match how the real world works. Independence says that, given two events A and B, knowing that A has already happened does not change the likelihood of B happening, and vice versa. Thus, two events being independent translates into two events that have no relationship to one another. Take a simple example. If I go home and decide to cook myself dinner, the likelihood that my next-door neighbor, whom I have never met, cooks dinner is unchanged. This is for both physical reasons (we don’t share a living space) and social reasons (we have no means of communicating with one another). However, when I decide to cook dinner, my roommate’s likelihood of cooking dinner does change, either for a physical reason (I’m taking up the stove so he can’t cook) or a social reason (he sees me cooking and decides he can just take some of my food). So we would say the likelihoods of me and my next-door neighbor cooking dinner are independent, while the likelihoods of me and my roommate cooking dinner are dependent. A toy numerical version of this, with probabilities I have simply made up for illustration, is sketched below: independence means the joint probability factors into the product of the marginals, so conditioning changes nothing; dependence means it does not.
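```python
# Made-up probabilities for the dinner analogy above.
p_me = 0.6                      # I cook dinner
p_neighbor = 0.5                # next-door neighbor cooks dinner
p_me_and_neighbor = 0.30        # equals 0.6 * 0.5, so the two events are independent
print(p_me_and_neighbor / p_me)       # 0.5: knowing I cook leaves his probability unchanged

p_roommate = 0.5                # roommate cooks dinner
p_me_and_roommate = 0.10        # less than 0.6 * 0.5, so the two events are dependent
print(p_me_and_roommate / p_me)       # ~0.17: if I'm cooking, he mostly just eats my food
```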

Hopefully, one can see where I’m going with this. All those authors who harp on how little one’s vote matters assume that each person’s choice to vote is independent of everyone else’s. If everyone’s decision to vote is independent, then the likelihood that my vote has any bearing on the election is indeed rather small. However, if my decision to vote persuades a family member or a friend in my social network to vote, and their decision in turn causes another to vote, then the cascading dependencies can make the numbers a bit more favorable. Let’s go to a simple statistical example I’ve concocted. The table below shows the joint distribution of two people, person A and person B, deciding to vote, where their probabilities are dependent:


                            VOTER B
                       P(Vote)   P(Not Vote)
VOTER A  P(Vote)        0.50        0.20
         P(Not Vote)    0.05        0.25



The table is very simple, so bear with me as I walk you through it. For example, sum across each row and you get the marginal probabilities of A voting and not voting:


Marg. Prob A Votes = P(A Votes and B Votes) + P(A Votes and B Not Votes) = 0.5 + 0.2 = 0.7

Marg. Prob A Not Votes = P(A Not Votes and B Votes) + P(A Not Votes and B Not Votes) = 0.05 + 0.25 = 0.3
   
     
You’ll notice that if we sum these two numbers we get 1, and that makes sense because Voter A can only vote or not vote, so one or the other happens 100% of the time. We can now ask questions about conditional probabilities. Given that voter B has voted, what is the probability that voter A also votes? To answer this, we restrict ourselves to the first column and see that P(A Votes and B Votes) = .5 and P(A Not Votes and B Votes) = .05. The sum of these two is only .55, the marginal probability that B votes. So to find the probability that A votes when B has voted, we divide the joint probability .5 by this restricted sample space: P(A Votes | B Votes) = .5/.55 ≈ .91. We can now verify these two voters are dependent, because the likelihood that A votes changes depending on whether B votes. Doing the same for the second column, we find P(A Votes | B Not Votes) = .2/.45 ≈ .44. Now things are getting interesting: given that B votes, the probability that A votes roughly doubles relative to when B stays home.

Now let’s complicate things a little. Say we have a pool of 100 voters; what is the likelihood that a single voter could sway the election? If we assume every person’s vote is independent, that is easy: it’s merely 1 in 100, or .01. However, using our numbers from before, suppose voter B decides to vote before voter A. Voter B still has a probability of .01 of swinging the election on her own, but because her decision to vote raises the probability that A votes by .91 - .44 = .47, the likelihood that voter B’s vote proves decisive is now .01 + .01*.47 = .0147.
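For anyone who wants to check the arithmetic, here is a quick Python sketch that recomputes the numbers above straight from the joint table, following the post's own rough accounting of the spillover:

```python
# Joint probabilities from the table above, keyed by (Voter A, Voter B).
joint = {
    ("vote", "vote"): 0.50,
    ("vote", "not"):  0.20,
    ("not",  "vote"): 0.05,
    ("not",  "not"):  0.25,
}

# Marginals
p_a_votes = joint[("vote", "vote")] + joint[("vote", "not")]    # 0.70
p_b_votes = joint[("vote", "vote")] + joint[("not", "vote")]    # 0.55
p_b_not   = 1 - p_b_votes                                       # 0.45

# Conditionals: if A and B were independent, both would equal p_a_votes.
p_a_given_b_votes = joint[("vote", "vote")] / p_b_votes         # ~0.91
p_a_given_b_not   = joint[("vote", "not")] / p_b_not            # ~0.44

# Spillover accounting from the 100-voter illustration: B's direct chance of
# being decisive is 1/100; her vote also raises A's turnout probability by
# the gap between the two conditionals, crediting her with an extra
# 0.01 * (0.91 - 0.44) chance through A.
p_pivot = 1 / 100
uplift = p_a_given_b_votes - p_a_given_b_not                    # ~0.46 (the post rounds to .47)
print(round(p_pivot + p_pivot * uplift, 4))                     # ~0.0146 (vs .0147 in the post)
```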

What this example is trying to illustrate is that if there are externalities to voting, then the likelihood one’s vote matters starts to creep upward. Given that our social networks are growing ever larger with the plethora of social media, these small spillovers start to add up. So is a person’s choice to vote actually independent of everyone else’s? Recent work by Betsy Sinclair at the University of Chicago, which looked at political canvassers in Los Angeles, suggests the independence assumption is incorrect. Networks influence political activity via social pressure to conform. Previous research had found that people in social networks with high rates of political activity conform to that pressure and are more likely to vote. The important point in Sinclair’s work is that the messenger matters: if the politically active person trying to gin up votes is not in the same group as the person they are persuading, the effect on political behavior is small. However, if both people are members of similar groups, the effect may be very large. For hermits who lack any social network, the likelihood that their vote matters is infinitesimally small; for those embedded in vast networks of friends, family, and co-workers, their vote may make more of a difference than the pundits would have you believe.

CITED:

Sinclair, B., McConnell, M., and Michelson, M. “Local Canvassing and Social Pressure: The Efficacy of Grassroots Voter Mobilization.” Forthcoming, Political Communication, July 7, 2010.

Friday, July 27, 2012

Objectively Subjective

Something that has always bugged me is humans’ ability to translate subjective, mental information about probabilities, happiness, guilt, etc. into objective, quantifiable numbers. For me the problem really became pronounced when reading a paper that asked students to assess, from 0-100%, how likely they felt their answer to a simulated SAT question was to be correct. Say that you are given 5 choices: A, B, C, D, and E. Ignore for a moment the inherent difficulty of forecasting with data, and instead focus on the process one would go through to assess how certain one is. First, you would have to ask how well you know the subject area. If your answer is ‘not too well’, then you have to translate that ‘not too well’ into some range. Let’s say there is a 30% chance I know the correct answer with certainty. This number is compared to the 20% chance of getting it right by pure guessing. However, now you start going through the answers themselves and determining how ‘reasonable’ they seem. The mental machinations may exclude one obviously incorrect choice, but now we’re stuck with what we mean by ‘reasonable’ in the context of some finite number. Let’s say I’m 100% certain D is wrong and 85% sure E is wrong. Knowing this, we might as well only select from A, B, and C. With this restricted choice set, my probability of being right, incorporating my prior belief state, works out to about 35.29%. Now isn’t that a nice number? However, we are ignoring one key problem: my initial assumption that I was 30% certain I knew the right answer. How am I to know that, because I got cut off earlier in the day by some bozo, I am now just a little more pessimistic in my outlook? Because I am in this “hot state”, I shave 10% off my initial assumption.
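Just to illustrate how much the final number swings with the hidden assumptions, here is a toy calculation of my own (not the exact renormalization used above): p_know is the chance I simply know the answer; otherwise I guess uniformly among the options I haven't ruled out.

```python
# Hypothetical confidence model: know the answer with probability p_know,
# otherwise guess uniformly among the remaining plausible options.
def confidence(p_know, options_left):
    return p_know + (1 - p_know) / options_left

print(confidence(0.30, 3))   # calm prior, D and E ruled out       -> ~0.53
print(confidence(0.20, 3))   # same person in a "hot state"        -> ~0.47
print(confidence(0.30, 4))   # keep E in play as a fourth option   -> ~0.48
```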
            
This question gets even more interesting when we look at how people make absolute comparisons. George Miller, who recently passed away, was a pioneer in the field of short-term memory, writing a now rather famous paper entitled ‘The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information’. One experiment of particular interest here deals with people’s ability to discern differences in tones. Prof. Miller sums it up nicely:

“When only two or three tones were used the listeners never confused them. With four different tones confusions were quite rare, but with five or more tones confusions were frequent. With fourteen different tones the listeners made many mistakes. These data are plotted in Fig. 1. Along the bottom is the amount of input information in bits per stimulus. As the number of alternative tones was increased from 2 to 14, the input information increased from 1 to 3.8 bits. On the ordinate is plotted the amount of transmitted information. The amount of transmitted information behaves in much the way we would expect a communication channel to behave; the transmitted information increases linearly up to about 2 bits and then bends off toward an asymptote at about 2.5 bits. This value, 2.5 bits, therefore, is what we are calling the channel capacity of the listener for absolute judgments of pitch. 
 So now we have the number 2.5 bits. What does it mean? First, note that 2.5 bits corresponds to about six equally likely alternatives. The result means that we cannot pick more than six different pitches that the listener will never confuse. Or, stated slightly differently, no matter how many alternative tones we ask him to judge, the best we can expect him to do is to assign them to about six different classes without error. Or, again, if we know that there were N alternative stimuli, then his judgment enables us to narrow down the particular stimulus to one out of N /6.”
The takeaway from all his results is that humans have an innate capacity to make absolute judgments among only about seven items on a uni-dimensional scale. For example, if someone were given 14 shades of green and asked which ones differed, they could typically sort them reliably into only about seven classes. One must remember that these are questions about single dimensions of OBJECTIVELY knowable items, like color, sound, or taste. The world out there is filled with the unknown. When pollsters and academics ask subjects about their “enthusiasm to vote”, their “dislike of the president’s economic policy”, or the “probability that your answer is right”, participants are doing their best to bring all these factors together and spit out a number. What I’m saying is that the results these processes glean may tell us little about people’s true tastes, and may instead depend heavily on how many choices participants are given, as well as other factors related to the framing of the questions asked.
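A quick back-of-the-envelope check of the figures in the passage above: 2.5 bits of transmitted information corresponds to roughly six reliably distinguishable categories, and 14 equally likely tones carry about 3.8 bits of input information.

```python
import math

print(2 ** 2.5)        # ~5.66: about six pitch classes a listener can keep straight
print(math.log2(14))   # ~3.81 bits of input information for 14 equally likely tones
```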

Thursday, September 1, 2011

Redistributing Job Insecurity

President Obama seems to be in support of new legislation barring employers from discriminating against the unemployed. Via Catherine Rampell, here is President Obama:

"Well, there is no doubt that folks who have been unemployed longer than six months have a tougher time getting back into the job market. Now, the single most important thing we can do is just have the economy strong so that employers aren’t as choosy because they’ve got to hire because their businesses are expanding.

But we have seen instances in which employers are explicitly saying we don’t want to take a look at folks who’ve been unemployed. Well, that makes absolutely no sense, and I know there’s legislation that I’m supportive of that says you cannot discriminate against folks because they’ve been unemployed, particularly when you’ve seen so many folks who, through no fault of their own, ended up being laid off because of the difficulty of this recession."


This seems to be in line with much of the administration's efforts to get the long-term unemployed back to work. Recent musings on the Georgia program, which offers government support to train workers in new skills, fit in this vein. While it is true that the massive increase in the duration of unemployment reflects a concentration of joblessness among older workers who can't seem to find new jobs, trying to tackle a demand problem by pushing these people back into the workforce will do little but redistribute the burdens of unemployment. For example, if a firm went on a hiring binge but said that for every person it hired it would fire one of its current workers, most would respond that this isn't really hiring at all; the firm is merely swapping one person on the unemployment line for another. The administration's approach to addressing unemployment follows exactly the same formula. Without some exogenous force pushing up demand, anyone hired under these proposals will still be hired into firms that are very reluctant to expand, and will still be competing in a very tight labor market where jobs are scarce. While these proposals raise the probability that the long-term unemployed get hired, they merely reduce the probability that anyone else seeking a job gets one. A swing and a miss, I would have to say.