There was no need to reaffirm my own conclusion, dipshit.
I know. I just wanted to point out your stupidity for my own amusement, you little shit fly.
There was no need to reaffirm my own conclusion, dipshit.
I don't know where you came up with that shit, but they only ask three simple questions in this poll of likely voters. The methodology proved sound the one other time it was tried, in the 2012 presidential election. That poll, conducted by the same people under the name RAND, was the third most accurate in predicting the outcome of that race, as I recall. The methodology may prove sound again, and it may not. We shall see.
You must think the people at USC and the LA Times are fools.
I don't know where you came up with that shit, but they only ask three simple questions in this poll of likely voters. ...
Where I came up with that shit was actually reading the page you linked, their description of the methodology used, and their own FAQ.
http://cesrusc.org/election/
Top of the main page:
The 2016 USC Dornsife / LA Times Presidential Election Poll represents a pioneering approach to tracking changes in Americans' opinions throughout a campaign for the White House.
...
From the "survey methodology" tab:
The Daybreak Poll is based on an internet probability panel survey. Daybreak Poll members are participants in the ongoing UAS internet probability panel of about 5,000 U.S. residents who were randomly selected from among all households in the United States.
From their FAQ:
Frequently Asked Questions
Q: Do you use a likely voter model?
A: No, the respondents provide us with their own subjective probability of voting and we use that to weight their responses.
Q: Who is eligible to participate in the Daybreak poll?
A: Any US citizen 18 and older
Q: How does UAS select respondents within a household?
A: All household members 18 and older are invited to participate
It's an internet poll of randomly selected households. NOT using likely voters, just anyone who claims to be over the age of 18 in the household.
In short, you found a poll whose results you agreed with and completely ignored how it was conducted.
I don't see what you're complaining about. Of course the participants in the poll come from randomly selected households. That's how legitimate polls are done.
I followed the RAND poll in 2012, and was impressed. I'm following this one for the same reasons.
Apparently USC and the LA Times were impressed, also.
I'm not so sure that's accurate:
http://www.latimes.com/politics/la-na-pol-usc-daybreak-poll-methodology-20160714-snap-story.html
Each day's poll respondents are a subset of the UAS election panel, roughly 3000 U.S. citizens who were randomly recruited from among all households in the United States. Respondents are asked three predictive questions: What is the percent chance that... (1) you will vote in the presidential election? (2) you will vote for Clinton, Trump, or someone else? and (3) Clinton, Trump or someone else will win?
It appears they try to discern who the likely voters are within a group of "3000 U.S. citizens" by asking the subset of roughly 400 they contact each day the percent chance they will vote.
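The mechanism described above — weighting each response by the respondent's self-reported turnout probability instead of applying a binary likely-voter screen — can be sketched roughly as follows. The respondent numbers here are hypothetical, purely for illustration:

```python
# Hypothetical respondents, each giving three "percent chance" answers
# per the poll's three questions: chance of voting, chance of voting
# Clinton, chance of voting Trump.
respondents = [
    # (p_vote, p_clinton, p_trump)
    (90, 80, 10),
    (50, 20, 70),
    (10, 50, 40),
]

def weighted_shares(respondents):
    """Vote shares weighted by each respondent's stated turnout probability.

    A respondent who says "90% chance I vote" counts nine times as much
    as one who says "10%" -- no one is screened out entirely.
    """
    total_weight = sum(p_vote for p_vote, _, _ in respondents)
    clinton = sum(p_vote * p_c for p_vote, p_c, _ in respondents) / total_weight / 100
    trump = sum(p_vote * _pt for p_vote, _, _pt in respondents) / total_weight / 100
    return clinton, trump

clinton, trump = weighted_shares(respondents)
print(clinton, trump)  # 0.58 0.32 for these made-up numbers
```

This is only a sketch of the idea, not the poll's actual code; the real poll also applies demographic and past-vote weights on top of this.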
One potential issue noted about the poll's methodology ...
http://www.nytimes.com/2016/08/09/u...or-donald-trump-has-a-major-problem.html?_r=0
One factor that could be contributing to the panel’s tilt toward Mr. Trump is its decision to weight its sample according to how people say they voted in 2012.
The pollsters ask respondents whether they voted for President Obama or Mitt Romney. They then weight the sample so that Obama voters represent 27 percent of the panel and Romney voters represent 25 percent, reflecting the split of 51 percent to 47 percent between the two among actual voters in 2012. (The rest include newly eligible voters and those who stayed home.)
This is a seemingly straightforward choice. After all, why wouldn’t you want the poll to include the right number of voters for Mr. Obama and Mr. Romney? But very few high-quality public surveys — in fact, none that I’m aware of — regularly use self-reported past voting to adjust their samples.
There’s a very good reason: People just don’t seem to report their past vote very accurately. Answers tend to wind up biased toward the winner; often, people who vote for the loser say they “can’t remember” or say they voted for someone else.
...
This is not a new phenomenon. Back in 2012, Pew’s surveys showed Mr. Obama ahead by 34 to 25 among voters from 2008. If you have a really long memory, you might even remember controversy about polls that showed people recalled voting for George W. Bush over Al Gore in 2000 by a comfortable margin in 2004 polls, even though Mr. Gore won the popular vote. But it’s not perfectly consistent, either: With Mr. Bush’s popularity flagging in 2007 and 2008, more polls started showing that voters recalled voting for John Kerry in 2004.
RealClear Politics includes this poll in their averages, and has referred to it as a poll of likely voters. It seems obvious to me the point of the methodology is to find the voting intent of likely voters, not unlikely voters.
The 3000 aren't "likely" voters. The poll attempts to discern how likely to vote the 400 it polls from that static group each day are, by asking each respondent the percent chance they will vote.
More from the NY Times article ...
With these figures in mind, the U.S.C./LAT poll’s decision to weight its sample to 27 percent for Mr. Obama and 25 percent for Mr. Romney is quite risky. If the panelists, like those in other surveys, are likelier to recall voting for the winner (Mr. Obama), then the poll is unintentionally giving extra weight to Republican voters. Or you can imagine a counterfactual: If the poll were weighted to 33 percent for Obama and 25 percent for Romney (per the NYT/CBS numbers), then Mrs. Clinton would hold a more comfortable lead.
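The risk the article describes can be shown with a toy calculation (the panel-recall numbers below are hypothetical, loosely based on the NYT/CBS figures it cites). If the raw panel over-recalls voting for the winner, forcing it down to the 27%/25% targets shrinks the weight on every recalled-Obama respondent:

```python
# Toy illustration of past-vote weighting. Suppose the raw panel's
# self-reported 2012 vote skews toward the winner: 33% recall voting
# Obama, 25% Romney (hypothetical, per the NYT/CBS comparison).
panel_recall = {"obama": 0.33, "romney": 0.25}

# The USC/LAT poll weights recalled-2012 vote to the actual result split.
usc_targets = {"obama": 0.27, "romney": 0.25}

# Per-group weight = target share / observed share.
weights = {g: usc_targets[g] / panel_recall[g] for g in usc_targets}
print(weights)  # each recalled-Obama respondent counts ~0.82, Romney 1.0
```

So if the over-recall is an artifact of memory bias rather than a real sampling skew, the reweighting quietly downweights Democratic-leaning respondents, which is the mechanism the article suggests could be tilting the poll toward Trump.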
It's an interesting approach, but it does have some potentially serious flaws in weighting. It'll be interesting to see whether respondents' self-identification of their vote in the previous election changes much over the coming months.
Every poll relies on people being honest. Nothing unusual about this one in that regard.
Appealing to authority has too many flaws, especially when that authority (RCP) just averages a bunch of polls without regard to the various methodologies of each. I agree that finding the intent of likely voters is the most desired approach, but this poll's reliability is very suspect because of the issues related in the link I provided above.
Yes, it is an interesting approach. It was highly successful in predicting the outcome of the popular vote in the 2012 presidential election. I like the fact the people who conduct the poll say that may have been luck.
Agreed, every poll has to be scrutinized for reliability issues. This poll has a serious one with its approach to weighting that could well be skewing the results. Time will tell, but it's clearly an outlier so far, which brings more scrutiny to the approach it takes.
Successful or just lucky? Has anyone looked at their 2012 results to determine that? That's all part of determining a poll's reliability, which is a critical part of determining the poll's validity.
And as a daily tracking poll, how valid is the result today versus closer to election day? As the article I posted said quite well ...
Back in 2012, Pew’s surveys showed Mr. Obama ahead by 34 to 25 among voters from 2008. If you have a really long memory, you might even remember controversy about polls that showed people recalled voting for George W. Bush over Al Gore in 2000 by a comfortable margin in 2004 polls, even though Mr. Gore won the popular vote. But it’s not perfectly consistent, either: With Mr. Bush’s popularity flagging in 2007 and 2008, more polls started showing that voters recalled voting for John Kerry in 2004.
The appeal to the authority of RCP was already made by Ulaven, I was just throwing it back at him.
But you threw it at me as well. You seem very defensive of this poll. I'm just pointing out clear issues that involve the reliability of it as a daily snapshot of likely voters.
Its scientific validity as a predictive poll of likely voters has yet to be shown. Do you have anything that's been published that does?
I posted something earlier in the thread about the RAND poll, if you're interested. It's pretty detailed.
I don't see why you are more critical of this poll than any other poll being done. They all have vulnerabilities.
The one value I see in this poll is the potential to track the shift in support because it does use a static group to survey. However, because of the weighting methodology that has a documented history of skewing results prior to election day, I don't put much confidence in the actual percentages today.
As I've said previously, this methodology has only been used once, and it was successful, or lucky, in predicting the outcome of the popular vote accurately in the presidential race of 2012. It may be successful, or lucky, this year, or it may not. We shall see.
I'm critical of it because it's the one you presented for analysis. If you want to present others, I'll critically analyze them as well.
I'll take that as "no".