I’ve always been kind of curious why there wasn’t an OSS option to “download” chunks of aggregated search content.
I know it would be technically challenging, but forcing crawler after crawler to fetch the exact same content, again and again, is also rather inefficient.
Thanks for sharing links to this project.