Post History

Meta Requirements of posts in Rigorous Science

posted 4y ago by HDE 226868 · edited 4y ago by HDE 226868

Answer
#3: Post edited by HDE 226868 · 2020-06-04T01:28:48Z (almost 4 years ago)
# Question requirements

I think it definitely falls on the asker to demonstrate first and foremost that the basic tenets of their idea are feasible. Their question should show that they've done a good deal of thinking about their problem already. They don't have to know the field intimately, of course, but they should make an argument that it's possible to approach the problem rigorously.

This could include citing a paper or other resource they've found, or doing a quick order-of-magnitude calculation. Again, it would be unreasonable to expect the asker to exhibit technical knowledge, but you don't necessarily need to be an expert to do some basic thinking about plausibility.

# Answer requirements

On Worldbuilding, the [requirements currently in place](https://worldbuilding.stackexchange.com/tags/hard-science/info) for an answer to a hard-science question are that it be backed up by at least one (and ideally more) of the following:

- Equations
- Empirical evidence
- Scientific papers
- "Other citations"

Looking at it with fresh eyes, I'm honestly not a fan (and, for context, I've been one of the forces behind the tag). It's not clear to me that the first, second and fourth points are useful. By the letter of the rule, you could misapply equations, rely on shaky "evidence", and quote a poorly-written press release, and still satisfy the given criteria. Heck, many Wikipedia articles could be valid, yet there are certainly dangerously inaccurate Wikipedia pages out there.

I'm going to recommend that we axe those three ideas, and whittle the restrictions down to mandate that scientific papers (or conference proceedings) must be cited, and must constitute the backbone of an answer. Preferably, this means that the paper has been published in a legitimate, peer-reviewed journal - no vanity publishers or predatory journals. This should indicate that the claims therein are at the very least reasonable enough from an expert's point of view. Not all scientific papers are widely accepted, but widely-accepted scientific ideas typically come from papers.

I'm on the fence about allowing preprints. I recall at least one case on Worldbuilding where someone cited an arXiv submission (in response to a non-hard-science question); I read it twice and found that it was littered with errors and plagiarism. Yet that answer received dozens of upvotes. I'd be inclined to discourage preprints except in extenuating circumstances. (They also might open the door to other dubious half-baked ideas, and content sources like [viXra](https://en.wikipedia.org/wiki/ViXra), which is . . . not to be trusted.)

This is going to mean that in many cases folks will have to either do literature searches or start from a place that cites papers (e.g. Wikipedia) and then dive into the works themselves. That itself may discourage people from answering - it's not easy to properly do a lit search. Parsing a paper is usually even harder, particularly if non-technical descriptions of it, such as press releases, cover it inaccurately. I'm thinking back to [recent inaccurate sensationalist coverage](https://www.newscientist.com/article/mg24532770-400-we-may-have-spotted-a-parallel-universe-going-backwards-in-time/) of [a rather more mundane result](https://arxiv.org/abs/2001.01737).

That point is really the crux of my argument that we should either require (or, at the very least, *strongly* recommend) that folks cite papers in their answers. It's extremely easy for the actual conclusions to get distorted when reading secondary sources, even if those sources are doing everything possible to avoid sensationalism and stick to the facts. I'm not saying you can't use secondary sources to support your claims or maybe make your explanations clearer or less technical - you certainly should, if that will increase readability and clarity - but I wouldn't want to see them be the sole pillars of an answer.

Is it worth the trouble? I think so. The number of these rigorous questions we get is likely not going to be high, and so the majority of users presumably will not have to go through the process of doing the research and writing these answers.
#2: Post edited by HDE 226868 · 2020-06-02T16:46:36Z (almost 4 years ago)
# Question requirements

I think it definitely falls on the asker to demonstrate first and foremost that the basic tenets of their idea are feasible. Their question should show that they've done a good deal of thinking about their problem already. They don't have to know the field intimately, of course, but they should make an argument that it's possible to approach the problem rigorously.

This could include citing a paper they've found or doing a quick order-of-magnitude calculation. Again, it would be unreasonable to expect the asker to exhibit technical knowledge, but you don't necessarily need to be an expert to do some basic thinking about plausibility.

# Answer requirements

On Worldbuilding, the [requirements currently in place](https://worldbuilding.stackexchange.com/tags/hard-science/info) for an answer to a hard-science question are that it be backed up by at least one (and ideally more) of the following:

- Equations
- Empirical evidence
- Scientific papers
- "Other citations"

Looking at it with fresh eyes, I'm honestly not a fan (and, for context, I've been one of the forces behind the tag). It's not clear to me that the first, second and fourth points are useful. By the letter of the rule, you could misapply equations, rely on shaky "evidence", and quote a poorly-written press release, and still satisfy the given criteria. Heck, many Wikipedia articles could be valid, yet there are certainly dangerously inaccurate Wikipedia pages out there.

I'm going to recommend that we axe those three ideas, and whittle the restrictions down to mandate that scientific papers (or conference proceedings) must be cited, and must constitute the backbone of an answer. Preferably, this means that the paper has been cited in a legitimate, peer-reviewed journal - no vanity publishers or predatory journals. This should indicate that the claims therein are at the very least reasonable enough from an expert's point of view. Not all scientific papers are widely accepted, but widely-accepted scientific ideas typically come from papers.

I'm on the fence about allowing preprints. I recall at least one case on Worldbuilding where someone cited an arXiv submission (in response to a non-hard science question); I read it twice and found that it was littered with errors and plagiarism. Yet that answer received dozens of upvotes. I'd be inclined to discourage preprints except in extenuating circumstances.

This is going to mean that in many cases folks will have to either do literature searches or start from a place that cites papers (e.g. Wikipedia) and then dive in to the works themselves. That itself may discourage people from answering - it's not easy to properly do a lit search. Parsing a paper is usually even harder, particularly if non-technical descriptions of it, such as press releases, cover it inaccurately. I'm thinking back to [recent inaccurate sensationalist coverage](https://www.newscientist.com/article/mg24532770-400-we-may-have-spotted-a-parallel-universe-going-backwards-in-time/) of [a rather more mundane result](https://arxiv.org/abs/2001.01737).

Is it worth the trouble? I think so. The number of these rigorous questions we get is likely not going to be high, and so the majority of users presumably will not have to go through the process of doing the research and writing these answers.
#1: Initial revision by HDE 226868 · 2020-06-02T14:53:15Z (almost 4 years ago)
# Question requirements

I think it definitely falls on the asker to demonstrate first and foremost that the basic tenets of their idea are feasible. Their question should show that they've done a good deal of thinking about their problem already. They don't have to know the field intimately, of course, but they should make an argument that it's possible to approach the problem rigorously.

This could include citing a paper they've found or doing a quick order-of-magnitude calculation. Again, it would be unreasonable to expect the asker to exhibit technical knowledge, but you don't necessarily need to be an expert to do some basic thinking about plausibility.

# Answer requirements

On Worldbuilding, the [requirements currently in place](https://worldbuilding.stackexchange.com/tags/hard-science/info) for an answer to a hard-science question are that it be backed up by at least one (and ideally more) of the following:

 - Equations
 - Empirical evidence
 - Scientific papers
 - "Other citations"

Looking at it with fresh eyes, I'm honestly not a fan (and, for context, I've been one of the forces behind the tag). It's not clear to me that the first, second and fourth points are useful. By the letter of the rule, you could misapply equations, rely on shaky "evidence", and quote a poorly-written press release, and still satisfy the given criteria. Heck, many Wikipedia articles could be valid, yet there are certainly dangerously inaccurate Wikipedia pages out there.

I'm going to recommend that we axe those three ideas, and whittle the restrictions down to mandate that scientific papers must be cited, and must constitute the backbone of an answer. Preferably, this means that the paper has been cited in a legitimate, peer-reviewed journal - no vanity publishers or predatory journals. This should indicate that the claims therein are at the very least reasonable enough from an expert's point of view. Not all scientific papers are widely accepted, but widely-accepted scientific ideas typically come from papers.

I'm on the fence about allowing preprints. I recall at least one case on Worldbuilding where someone cited an arXiv submission (in response to a non-hard science question); I read it twice and found that it was littered with errors and plagiarism. Yet that answer received dozens of upvotes. I'd be inclined to discourage preprints except in extenuating circumstances.

This is going to mean that in many cases folks will have to either do literature searches or start from a place that cites papers (e.g. Wikipedia) and then dive in to the works themselves. That itself may discourage people from answering - it's not easy to properly do a lit search. Parsing a paper is usually even harder, particularly if non-technical descriptions of it, such as press releases, cover it inaccurately. I'm thinking back to [recent inaccurate sensationalist coverage](https://www.newscientist.com/article/mg24532770-400-we-may-have-spotted-a-parallel-universe-going-backwards-in-time/) of [a rather more mundane result](https://arxiv.org/abs/2001.01737).

Is it worth the trouble? I think so. The number of these rigorous questions we get is likely not going to be high, and so the majority of users presumably will not have to go through the process of doing the research and writing these answers.