Just saw a post on my timeline about Scrapling, a Python scraping framework that promises to keep working when websites change by adapting your selectors to the new page structure. So if the site changes something about the button you want to click after you build the automation, it finds the new button without you needing to touch the code.
After many years of working with web scraping and fixing code every time a site changes, this got me curious. How do they find the updated component without human action? So I cloned the repository and spawned a few research agents to find out.
When the framework first matches the element you target, it saves a fingerprint of that element to a local database: its full path, tag name, classes, id, text, and other properties.
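A minimal sketch of that first step, with made-up helper and field names (this is the idea, not Scrapling's actual code):

```python
# Hypothetical sketch: the kind of fingerprint saved the first time
# an element is matched. Field names are illustrative, not Scrapling's.

def make_fingerprint(path, tag, classes, elem_id, text):
    """Bundle the identifying properties of a matched element."""
    return {
        "path": path,                 # e.g. "body/div[2]/form/button"
        "tag": tag,                   # element tag name
        "classes": sorted(classes),   # sorted so ordering never matters
        "id": elem_id,
        "text": text.strip(),
    }

fp = make_fingerprint(
    path="body/div[2]/form/button",
    tag="button",
    classes=["btn", "btn-primary"],
    elem_id="checkout",
    text="Buy now",
)
```

Persisting this dict (keyed by your original selector) is all the bookkeeping the adaptive step needs later.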
When the page updates and your element can no longer be found, it parses the entire page document, scores every candidate against those saved properties, and returns the closest match to your old spec, updating the stored fingerprint. Under the hood it uses Python's SequenceMatcher for the similarity scoring.
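The re-matching step can be sketched like this, again with hypothetical helpers rather than Scrapling's real API, using the stdlib `difflib.SequenceMatcher` the post mentions:

```python
# Hypothetical sketch of relocating a moved element by fuzzy-matching
# its saved fingerprint against every candidate on the new page.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """String similarity ratio in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()


def score(saved: dict, candidate: dict) -> float:
    """Average similarity across the fingerprint's properties."""
    keys = ("path", "tag", "id", "text")
    return sum(similarity(str(saved[k]), str(candidate[k])) for k in keys) / len(keys)


def relocate(saved: dict, candidates: list[dict]) -> dict:
    """Return the candidate element closest to the saved fingerprint."""
    return max(candidates, key=lambda c: score(saved, c))


saved = {"path": "body/div/form/button", "tag": "button",
         "id": "checkout", "text": "Buy now"}
candidates = [
    {"path": "body/div/nav/a", "tag": "a", "id": "", "text": "Home"},
    # The same button, moved into a footer section:
    {"path": "body/footer/form/button", "tag": "button",
     "id": "checkout", "text": "Buy now"},
]
best = relocate(saved, candidates)  # picks the relocated button
```

Because id, tag, and text still match exactly, the moved button scores far above every other element even though its path changed.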
For slow-moving sites, this is a clever idea. Say they moved the same button somewhere else on the page: something like Scrapy would break, but this finds the new button since the text, classes, and id did not change.
Definitely something I would port to my projects if I were still working in this space.