The widespread use of social media has led to an increase in false and misleading information presented as legitimate news, commonly known as fake news. This poses a threat to societal stability and has motivated the development of fake news detectors that use machine learning to flag suspicious content. However, existing fake news detection models are vulnerable to attacks by malicious actors who manipulate input data to alter predictions. Research on attacks that manipulate news comments is limited, and existing attack models are easily detectable. We propose two new attack strategies that instead use real, pre-existing comments drawn from the same dataset as the target news article to fool fake news detectors. Our experimental results show that fake news detectors are less robust to our proposed attack strategies, which rely on pre-existing human-written comments, than to existing attack methods, including a malicious synthetic comment generator.
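To make the high-level idea concrete, the following is a minimal sketch, not the paper's actual method, of one plausible way to attack a detector using only real, pre-existing comments: greedily attach comments harvested from other articles in the same dataset that most shift the detector's score toward the attacker's target label. The function names, the detector's callable signature, and the toy scoring rule are all illustrative assumptions.

```python
# Illustrative sketch only: a greedy attack that appends real, pre-existing
# comments from the same dataset to a target article in order to push a fake
# news detector toward the attacker's desired label. The `detector` callable
# (returning an estimated P(fake | article, comments)) is a hypothetical
# interface, not an API from the paper.

from typing import Callable, List


def greedy_comment_attack(
    detector: Callable[[str, List[str]], float],
    article: str,
    existing_comments: List[str],
    candidate_comments: List[str],    # real comments taken from other articles
    budget: int = 3,                  # maximum number of comments the attacker may add
    target_label_fake: bool = False,  # False: push the detector toward "real"
) -> List[str]:
    """Greedily pick pre-existing comments that most move the detector's score."""
    comments = list(existing_comments)
    remaining = list(candidate_comments)
    for _ in range(budget):
        base = detector(article, comments)
        best_comment, best_score = None, base
        for cand in remaining:
            score = detector(article, comments + [cand])
            better = score > best_score if target_label_fake else score < best_score
            if better:
                best_comment, best_score = cand, score
        if best_comment is None:      # no candidate improves the attack; stop early
            break
        comments.append(best_comment)
        remaining.remove(best_comment)
    return comments


if __name__ == "__main__":
    # Toy stand-in detector for the demo: scores "fake" from sensational words
    # and "real" from corroborating words. Purely for illustration.
    def toy_detector(article: str, comments: List[str]) -> float:
        text = " ".join([article] + comments).lower()
        fake = sum(text.count(w) for w in ("shocking", "hoax", "exposed"))
        real = sum(text.count(w) for w in ("confirmed", "verified"))
        return max(0.0, min(1.0, 0.2 * fake - 0.2 * real))

    adversarial_comments = greedy_comment_attack(
        toy_detector,
        article="Shocking report exposed as a hoax by officials.",
        existing_comments=["This looks shocking."],
        candidate_comments=["Officials confirmed the report.", "Reliable sources verified this."],
    )
    print(adversarial_comments)
```

Because every appended comment is genuine human-written text already present in the dataset, such an attack avoids the stylistic artifacts that make synthetically generated comments easy to detect.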