By default, the robots.txt shipped with XMLUI allows indexing of all content, including every browse, search, and discovery page. As a result, search engine results point mostly to these lists of results rather than to the items themselves. I suggest disallowing the following pages by default:
Note that the current robots.txt already contains this comment:
```
# Uncomment the following line ONLY if sitemaps.org or HTML sitemaps are used
# and you have verified that your site is being indexed correctly.
# Disallow: /browse
```
Since all items should be reachable via the browse pages of the community/collection structure, /browse should remain allowed by default so that spiders can explore the whole repository. The /discover and /search-filter pages, however, are redundant for crawling and only clutter the search results, so they should be disallowed, as in the sketch below.
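A minimal sketch of the proposed default, assuming the standard XMLUI URL paths (/discover for faceted discovery and /search-filter for filter pages); the /browse paths are deliberately left crawlable:

```
User-agent: *
# Keep /browse crawlable so spiders can reach every item
# through the community/collection structure.
#
# Block discovery and search-filter pages, which only
# duplicate content already reachable via /browse:
Disallow: /discover
Disallow: /search-filter
```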