

When running a query with ChatGPT with search enabled, can I restrict the search to only certain domain(s)?

Score: +0 / −2

When running a query with ChatGPT with search enabled, can I restrict the search to only certain domain(s)?

For example, only search https://genai.stackexchange.com/




1 comment thread

I don't think the answer to your cross-post on GenAI SE is correct. If the whole prompt is parsed by ... (7 comments)

1 answer

Score: +1 / −0

The answer appears to be no. For that to work, there would have to be a system in place that filters both the training material and the set of web pages an auxiliary search may draw on, and that system would have to be exposed to the end user for configuration. No such system exists. However, a seemingly sufficient workaround has been given in the answer to the crosspost on GenAI.SE:

It's a chatbot. Simply ask it to restrict its search.

If one is willing to use generative AI despite its inherent drawbacks, then this particular drawback should be reasonable to accept as well.
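
For illustration, a prompt following that workaround might look roughly like this (the exact wording is only an assumption, and nothing guarantees the model will honor it):

    Using web search, answer the following question, but only consult pages on genai.stackexchange.com: <your question>

Whether such an instruction is actually respected is exactly the uncertainty discussed next.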

If the whole prompt is processed by the LLM in the ordinary way, then the text it generates is merely an approximation, a statistical likelihood, not a guaranteed outcome. In other words, it is up to the LLM itself whether it respects your requirement. To the best of my knowledge, the LLM is not designed to enforce such a restriction, because it does not reason and it does not comply with strict rules: the prompt is not parsed into rules, it is read as tokens from which new tokens are generated.

In other words, the answer is "no" as long as there is no mechanism that bypasses the prompt. Even if you could set such a limit on the search, there would still be no way to filter the training material, since that step lies long in the past.


0 comment threads
