Welcome back!
In part #1, you would likely earn partial credit. When discussing bias on the AP exam, you typically have to do 3 things: (1) explain the source of the bias (“how” it happens), (2) explain the reason for that source existing (“why” it happens), and (3) explain the impact on the result (“what” happens). Reading your response, I see evidence for #1 and #3 - you mention “not everyone responded to the survey” (#1 - how) and that this will probably “underestimate the true proportion” (#3 - the impact). To my eyes, though, your response does not address #2 - why the nonresponse would lead to an underestimate, which would imply that the 37.8% is lower than the true proportion we’d get if we could ask everyone (perhaps it’s really closer to 50%). You would need to make an argument for *why the people who responded to the survey are more likely to say no and thus produce an underestimate* - perhaps they are strongly opposed to taxes of any kind, or the wording of the question made them feel like their money could be better spent elsewhere. Whatever you decide is the case, you should present and defend why it impacts the responses. For nonresponse to turn into nonresponse bias, the people who do participate must be more likely to answer a certain way than the people who don’t participate.
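If it helps to see that mechanism in action, here’s a minimal simulation sketch in Python. All the numbers are made up for illustration - I’m assuming the true support is 50% and that opponents answer the survey at three times the rate of supporters:

```python
import random

random.seed(1)

# Hypothetical population: assume the true support is 50%.
TRUE_P = 0.50
population = [random.random() < TRUE_P for _ in range(100_000)]

# Assumed nonresponse mechanism: "no" voters - say, people strongly
# opposed to any new tax - are far more motivated to answer the survey.
RESPONSE_RATE_YES = 0.20
RESPONSE_RATE_NO = 0.60

invited = random.sample(population, 1000)
responders = [
    supports for supports in invited
    if random.random() < (RESPONSE_RATE_YES if supports else RESPONSE_RATE_NO)
]

p_hat = sum(responders) / len(responders)
print(f"True proportion:   {TRUE_P:.3f}")
print(f"Survey estimate:   {p_hat:.3f}  (an underestimate)")
```

The differential response rate is the whole story here - take it away (make the two rates equal) and the estimate lands right back near the truth, no matter how few people respond.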
Additionally, while I’m not assuming this is the case, students often mistakenly believe that getting responses from fewer people than you expect automatically produces an underestimate - it does not. “Underestimate” specifically means the proportion/mean/whatever-statistic-is-being-measured comes out lower than the true value in the population. A small and biased sample can produce an overestimate just as easily as an underestimate - perhaps in this scenario we ask a small group of people who live near roads with lots of potholes what they think. They would likely support the city’s proposal more than others, and therefore produce an overestimate.
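Here’s the same kind of sketch pointed in the other direction - this time the bias comes from *who we ask* rather than who answers (again, every number is invented for illustration):

```python
import random

random.seed(2)

# Hypothetical city: 20% of residents live near pothole-ridden roads,
# and they support the proposal at a much higher (assumed) rate.
NEAR_P, ELSEWHERE_P = 0.80, 0.45

def resident():
    near = random.random() < 0.20
    supports = random.random() < (NEAR_P if near else ELSEWHERE_P)
    return near, supports

city = [resident() for _ in range(100_000)]
true_p = sum(s for _, s in city) / len(city)

# Biased sample: only ask 300 people who live near the potholes.
sample = [s for near, s in city if near][:300]
p_hat = sum(sample) / len(sample)

print(f"True proportion:   {true_p:.3f}")
print(f"Biased estimate:   {p_hat:.3f}  (an overestimate this time)")
```

[OK, thanks for coming to my TED Talk about bias. On to the next part…]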
In part #2, we have a little bit of reviewing to do. In part (a), you correctly interpret what a 95% confidence interval is, but that is not the same as a confidence level. A confidence level represents a “long-run capture rate” - it describes the method that produced the interval, not any single interval. You can check out an overview from a previous stream at this link - it’s time-stamped to the part you’d need. The correct answer in this case would sound something like “if we were to take many, many random samples of 300 city residents and ask them the question, about 95% of the confidence intervals we constructed would capture the true value of p, the proportion of all city residents who would respond yes to the question.”
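If you want to watch that long-run capture rate happen, here’s a small simulation sketch. Note that the true p is something I’ve made up - in real life we never get to know it, which is exactly why we build intervals:

```python
import math
import random

random.seed(3)

TRUE_P = 0.52   # assumed true proportion - unknowable in real life
N = 300         # sample size from the problem
Z_STAR = 1.96   # critical value for 95% confidence

captured, trials = 0, 10_000
for _ in range(trials):
    # Take a random sample of 300 and build a one-proportion z-interval.
    p_hat = sum(random.random() < TRUE_P for _ in range(N)) / N
    margin = Z_STAR * math.sqrt(p_hat * (1 - p_hat) / N)
    if p_hat - margin <= TRUE_P <= p_hat + margin:
        captured += 1

print(f"Capture rate across {trials} intervals: {captured / trials:.3f}")
# Prints a number close to 0.95 - that long-run rate *is* the "95%".
```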
For #2 part (b), you’ve also committed a relatively common error: while it is true that 50% is in the interval, the presence of other, smaller values in the interval provides evidence against the claim that at least 50% of residents support the proposal. It’s just as plausible that the true proportion is 48.5%, or 49%, or 49.9%. And since all values within a confidence interval are considered “reasonable” values for p, we cannot say with confidence that the true population proportion is at least 50%. We could only say that if the entire interval is 50% or higher - for example, (0.512, 0.592).
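To make it concrete, here’s the arithmetic with hypothetical numbers - I don’t have the problem’s actual sample result in front of me, so suppose the 300 residents produced p-hat = 0.52:

```python
import math

n, p_hat = 300, 0.52      # hypothetical sample result
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - margin, p_hat + margin

print(f"95% CI: ({lower:.3f}, {upper:.3f})")   # about (0.463, 0.577)
# The data only back "at least 50%" if the ENTIRE interval is >= 0.50:
print("Claim supported:", lower >= 0.50)       # False
```

Even though p-hat itself sits above 50%, the interval dips below 0.50, so the data can’t rule out that only a minority supports the proposal.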
In part (c), you give the correct rationale for the “large counts” condition - short and to the point! This would earn full credit.
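For anyone else reading along, the condition being cited is that both n·p̂ and n·(1 − p̂) are at least 10 - a quick check using the same hypothetical numbers from above:

```python
n, p_hat = 300, 0.52                             # hypothetical numbers
print(n * p_hat >= 10, n * (1 - p_hat) >= 10)    # True True (156 and 144)
```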
All in all, there are some little things to clean up. I’m hoping my feedback helps - let me know if you have any follow-up questions!
~Jerry