Assignment 1 failing tests

My code runs the way I expected it to, but I'm not sure what is failing. I have gone through the error details, but I can't tell what they're describing. I have a few console.clear() calls, and the candidate name is not a template literal. Could either of these be causing the failure?
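(By "template literal" I mean the backtick syntax — roughly like this, with placeholder variable names that aren't from the actual starter code:)

// Placeholder names - a template literal uses backticks and ${} interpolation:
const firstName = 'Grace';
const lastName = 'Hopper';
const candidateName = `${firstName} ${lastName}`;
console.log(candidateName); // "Grace Hopper"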

Can you post the error details from GitHub? I failed several checks because GitHub expected the questions to contain a certain number of spaces, or wording slightly different from the assignment directions. Once I changed my wording to what GitHub expected, the checks passed.

Hopefully this takes you to the errors.

Same issue I had at first. It is expecting your questions to be typed exactly the way GitHub wants them, not the way they were worded in the assignment directions.

Expected [ 'Who was the first American woman in space?:', 'True or False: 5000 meters = 5 kilometers?:', '(5 + 3)/2 * 10 =', 'Given the array [8, 'Orbit', 'Trajectory', 45], what entry is at index 2?:', 'What is the minimum crew size for the International Space Station (ISS)?:' ] to contain 'Who was the first American woman in space? '.

If you change your question to match the wording after "to contain" exactly, that should fix this error. You will have to do this for each Expected/to contain error on GitHub.
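For example, with the error above, the first question would need to change from the assignment's wording to the autograder's. A sketch (your variable name may differ, and note the trailing space):

// Each string must match its "to contain" text exactly, including trailing spaces:
const questions = [
  'Who was the first American woman in space? ', // was '...in space?:' per the directions
  // ...the remaining questions, each matching its own "to contain" string
];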

Thanks for looking through the code. I managed to understand what the errors were after you explained the issue.

Thanks for posting this; it helped me. In my submission I asked LC whether they intended the array descriptions in the instructions to differ from the GitHub autograder. If so, please give us a hint (I spent a LOT of time trying to figure it out); if not, please fix the instructions.

Hi @rlp390, @miclamb241 & @Ebbie, I'm hoping you can help me. I just committed my code, but it failed in GitHub (it works perfectly in Repl.it). I'm sure it's failing because the GitHub autograder expects different descriptions, like you said above. Can you tell me where I can find the details of this within GitHub? In other words, after I've committed and it's failed, is there a certain place in GitHub that specifies what needs correcting?
Here is my GitHub repo: LaunchCodeEducationClassrooms/assignment-1-candidate-testing-em-graham (created by GitHub Classroom)

Guys sorry never mind, I found it in the end!
If anyone else is wondering, I found it by clicking on the 7-digit code next to the red X, then Actions, All Workflows, clicking on my commit, then Autograding.
That will walk you through all of the "errors" that autograding found, and as @rlp390 explained above, simply change each one to whatever is after "to contain".

I am down to one error, but it’s got me stumped. This is the error:

Error: Expected [ 'Who was the first American woman in space? ', 'True or false: 5 kilometer == 5000 meters? ', '(5 + 3)/2 * 10 = ? ', 'Given the array [8, "Orbit", "Trajectory", 45], what entry is at index 2? ', 'What is the minimum crew size for the ISS? ' ] to contain 'Given the array [8, 'Orbit', 'Trajectory', 45], what entry is at index 2? '.

It's only tripping on the quotation marks around 'Orbit' and 'Trajectory', but if I change the quotation marks, the string ends early and the program treats Orbit and Trajectory as code, so it won't run and gives a different error instead. Does anyone know a way around this?

If anyone else is having trouble with this, it doesn't trigger the error if you change the quotation marks around the whole phrase instead of the quotation marks around 'Orbit' and 'Trajectory'.
example:
"Given the array [8, 'Orbit', 'Trajectory', 45], what entry is at index 2? "

Thanks to the help above, I was able to narrow my issue down to one error that I can't seem to figure out. Can anyone shed some light?

UPDATE: with the info provided above about how to read the Autograding errors, I was finally able to get it to pass. You guys rock, thanks!

@emgraham - also, to find the exact wording expected for the arrays (prior to commit/push), you can go to the spec folder, open candidate-testing-spec.js, and see exactly what they're grading for. It appears that possibly not everyone's descriptions are even the same.
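The "Expected [...] to contain" wording in the errors above looks like Jasmine's toContain matcher, so the spec entries are probably shaped something like this (a guess at the structure, not a copy of the actual file):

// Hypothetical spec excerpt - only the quoted string is taken from this thread:
it('asks the questions word for word', function () {
  expect(questions).toContain('Who was the first American woman in space? ');
});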