In the past, search engine spiders relied largely on meta data to determine a website's presence and rank it accordingly. As a result, website developers stuffed their meta tags with keywords, relevant or not. Search engines are now far more sophisticated, using algorithms that evaluate the entire page content rather than just the meta data. Today, the main role of meta data is to confirm that it matches the page content and the site's position.
Search engines crawl through billions of websites on the internet and build an index on their web servers. When a search is performed, the engine looks up the index and returns results relevant to the query. Beyond the index, search engines also weigh the quality of the page itself: high-quality content, what other users are saying about the website, and how popular the website is (its trend) all help a search engine find and rank a site. The credibility of the author can be a factor as well.
Nevertheless, while most meta data no longer carries the weight with search engine spiders that it did some years back, the Title, Description, and Keywords tags still show up when a website appears in search results. According to a SurveyMonkey study, 43.2% of people click on a given result because of a well-articulated meta description. The description, which can be thought of as a website summary, increases the click-through rate of visitors to any website.
The clickable link text for a website in search results is taken from the title tag. It can determine whether a user clicks through to a page, even after the search engine algorithm has discovered it.
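As a minimal sketch of the tags discussed above, the fragment below shows where the title, description, and keywords live in a page's head section; the tag values themselves are illustrative placeholders, not recommendations:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <!-- The title tag: shown as the clickable link in search results -->
  <title>Fresh Roast Coffee | Small-Batch Beans Delivered</title>
  <!-- The meta description: the summary shown under the link,
       which can influence click-through rate -->
  <meta name="description"
        content="Small-batch coffee beans roasted weekly and shipped to your door.">
  <!-- The meta keywords tag: largely ignored by modern search engines -->
  <meta name="keywords" content="coffee, fresh roast, beans">
</head>
<body>
  <!-- Page content: what modern ranking algorithms actually evaluate -->
</body>
</html>
```

A description around 150 to 160 characters that accurately summarizes the page is generally the safest choice, since search engines may truncate longer ones.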
In conclusion, while HTML meta data may no longer be important to search engines in finding a website, it remains relevant to the click-through rate a site earns from the search result pool.