Why is a new site still not indexed after more than a month?
Source: Shangpin China | Type: Website Encyclopedia | Time: March 28, 2014
A friend told me that his website has been online for a month and that he insists on publishing an original article every day, yet it still has not been indexed. His effort has gone unrewarded, so I diagnosed the site and wrote this first website-diagnosis article since my blog changed its domain name. Lately I have also been spending about two hours a day reading and summarizing books on SEO website optimization.
[My company site has been online for a month, and I insist on at least one original article every day. Baidu still does not index it, and I do not know what to do.]
After reviewing the website, you can work through the solutions below; they should make the indexing problem of a newly built site easier to solve.
[ZAC mentions in "SEO Practical Password" that when a site's indexing rate is low, the SEOer needs to adjust the site structure and build more external links. Site structure means that, once a spider enters the site, it can crawl the content of every page smoothly; the role of external links is to give spiders entry points from which to reach and crawl the site.]
1. Problems with the robots.txt setting:
The website has no robots.txt file at all; a missing or misconfigured robots.txt can directly shut a marketing site out of the index. It is recommended to set up robots.txt for the site; if you do not know how, look it up on Baidu Encyclopedia.
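If you are comfortable with a little Python, the check can be automated rather than eyeballed. The following is a minimal sketch using the standard urllib.robotparser module; the domain www.example.com and the sample paths are placeholders, not the friend's actual site.

# Minimal sketch: verify that robots.txt exists and does not block Baiduspider.
# "www.example.com" is a placeholder; substitute the real domain.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()  # fetches and parses the file; a missing file is treated as "allow everything"

for path in ["/", "/news/", "/products/"]:
    url = "http://www.example.com" + path
    allowed = rp.can_fetch("Baiduspider", url)
    print(path, "allowed for Baiduspider" if allowed else "BLOCKED for Baiduspider")

# A permissive robots.txt that also advertises the sitemap looks like:
#   User-agent: *
#   Disallow:
#   Sitemap: http://www.example.com/sitemap.xml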
2. Website structure:
Looking at the site's architecture, the pages use a table layout with redundant code. That is less friendly to spider crawling than div+css, although it does not by itself stop spiders from crawling the site. The URLs, however, have not been made pseudo-static, which makes crawling harder still (besides pseudo-static URLs, basics such as the preferred domain, 301 redirects, a 404 page, and a sitemap should also be set up, and a quick check for them is sketched at the end of this section). So the structure has problems, but structural problems alone do not make a search engine refuse to index a site. Page quality is what I believe is the important reason spiders are not indexing it:
(1) The overall page layout and user experience are poor, so page quality is not high.
(2) Although the articles are original (not verified, but I take them to be original), they are very short.
Suggestion: aim for roughly 500 words per article and present each one as a mix of images and text.
(3) Self-defeating SEO:
Beyond being too short, the articles are hard for search engines to trust: each one is stuffed with keywords and bare URLs, which at first glance looks like over-optimization, and there are no anchor-text links, so weight cannot be passed on reasonably. Even once the site is indexed, it will take longer for the home page to rank for its keywords.
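As for the basics mentioned above (preferred domain, 301 redirect, 404 page, sitemap), they are easy to verify by hand or with a few lines of Python. Below is a minimal sketch using the third-party requests library; it assumes, purely for illustration, that www.example.com is the chosen preferred domain.

# Minimal sketch: check the preferred-domain 301, the 404 page, and the sitemap.
# www.example.com as the preferred domain is an assumption for illustration.
import requests

# 1. The bare domain should 301-redirect to the preferred www domain.
r = requests.get("http://example.com/", allow_redirects=False, timeout=10)
print("non-preferred domain:", r.status_code, r.headers.get("Location"))

# 2. A clearly non-existent URL should return HTTP 404, not 200 (a "soft 404").
r = requests.get("http://www.example.com/no-such-page-12345", timeout=10)
print("missing page returns:", r.status_code)

# 3. The sitemap should exist and be reachable.
r = requests.get("http://www.example.com/sitemap.xml", timeout=10)
print("sitemap:", r.status_code, len(r.content), "bytes")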
3. External link (backlink) problems:
Give spiders more paths into the site by building more high-quality backlinks.
Looking at the site's backlinks, apart from a forum signature there are only a few posts on a home-appliance repair forum and essentially no backlinks from any other platform. Besides giving spiders very few entry points, a pile of forum signatures is easily penalized by search engines. Baidu has never explicitly said that forum links are forbidden, but industry SEO experts, including ZAC, have described forum backlinks as a kind of spam link that is pointless to build in volume. Suggestion: use Baidu's webmaster tools to reject backlinks from low-quality sites.
4. Website server stability:
Frequent downtime prevents spiders from crawling the site.
For a site like this, I have reason to believe the hosting is also very cheap, and cheap virtual hosting is what we usually call garbage hosting.
Suggestion: use an uptime-monitoring report to track the stability of the hosting, and check whether the site frequently fails to open.
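If no third-party monitoring service is at hand, even a crude script run from another machine will show how often the hosting goes down. A minimal sketch with the requests library follows; the URL and the five-minute interval are placeholders to adjust.

# Minimal sketch: poll the homepage and log every failure, to spot unstable hosting.
import time, datetime, requests

URL = "http://www.example.com/"   # placeholder domain
INTERVAL = 300                    # seconds between checks

while True:
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    try:
        r = requests.get(URL, timeout=10)
        status = "OK %d (%.2fs)" % (r.status_code, r.elapsed.total_seconds())
    except requests.RequestException as exc:
        status = "DOWN: %s" % exc
    print(stamp, status)
    time.sleep(INTERVAL)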
5. Friendly links:
A new site should not have too many outbound links.
A new site starts with little weight, and every extra outbound link disperses it further. I do not recommend exchanging friendly links before the site is indexed, because it leaves obvious traces of deliberate SEO. Even after the site is indexed, do not exchange too many: if the other sites' authority values are not high, your site, which has no weight of its own, gains nothing from acting as a hub that points outward so much.
Suggestion: when exchanging friendly links, look not only at the other site's weight but also at how many outbound links it already carries. (Authority value and hub value come from the HITS algorithm; if you are not familiar with it or want to know more, look it up on Baidu.)
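For readers who would rather see it than search for it, here is a minimal sketch of the HITS iteration on a tiny made-up link graph. The pages and links are purely illustrative; real scores are computed by the search engine over the whole web graph.

# Minimal HITS sketch: each page gets an authority score (how strongly good hubs
# point to it) and a hub score (how strongly it points to good authorities),
# refined by repeated mutual updates.
links = {                      # toy link graph: page -> pages it links out to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
auth = {p: 1.0 for p in pages}
hub = {p: 1.0 for p in pages}

for _ in range(20):                       # a few iterations converge on this toy graph
    # authority of p = sum of hub scores of the pages linking to p
    auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
    norm = sum(v * v for v in auth.values()) ** 0.5
    auth = {p: v / norm for p, v in auth.items()}
    # hub of p = sum of authority scores of the pages p links to
    hub = {p: sum(auth[q] for q in links[p]) for p in pages}
    norm = sum(v * v for v in hub.values()) ** 0.5
    hub = {p: v / norm for p, v in hub.items()}

print("authority:", {p: round(v, 3) for p, v in auth.items()})
print("hub:      ", {p: round(v, 3) for p, v in hub.items()})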
6. Domain name history:
The domain has been used before and may be affected to some extent.
A domain-history lookup shows the domain had indexing records in 2007 and 2013. If the domain ever picked up a bad record, re-enabling it now means going through a period of re-evaluation. This factor is usually minor, but it cannot be ruled out.
7. Website log analysis:
Use the Light Year log analysis tool to look at the raw records of spiders crawling the site.
If the first six points are personal experience and conjecture, then analyzing the website logs puts things on firmer ground. The site is not being indexed, so look at what the spiders actually do each day. If spiders crawl the site's sections every day yet nothing is indexed, one possibility is that the site is still in the observation period; many sites have hit similar problems recently and indexing has been very slow. Since spiders are crawling, the search engine already knows the site exists but is simply not indexing it yet. The other possibility is that the site has entered the sandbox, in which case check it against the typical sandbox triggers below and fix whatever is unreasonable.
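If the Light Year tool is not available, the same raw picture can be pulled straight from the server access log with a few lines of Python. The sketch below assumes an Apache/Nginx combined log format and a placeholder file name access.log; adjust both to the real server.

# Minimal sketch: count daily Baiduspider requests in an Apache/Nginx combined log.
# "access.log" and the log format are assumptions; adjust to the real server setup.
import re
from collections import Counter

daily = Counter()
urls = Counter()
pattern = re.compile(r'\[(\d{2}/\w{3}/\d{4})')   # e.g. [28/Mar/2014:10:15:32 +0800]

with open("access.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        if "Baiduspider" not in line:
            continue
        m = pattern.search(line)
        if m:
            daily[m.group(1)] += 1
        parts = line.split('"')
        if len(parts) > 1:                        # the '"GET /path HTTP/1.1"' field
            request = parts[1].split()
            if len(request) > 1:
                urls[request[1]] += 1

print("Baiduspider hits per day:", dict(daily))
print("Most crawled URLs:", urls.most_common(10))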
[Why does a website enter the Google sandbox?]
1. An old website that suddenly gains a large number of inbound links.
2. A new website, especially a brand-new one, that suddenly gets a large number of inbound links.
3. A large number of pages within the site deliberately pointed at one page you want to optimize.
4. A large number of spam inbound links acquired, or a large number of spam outbound links posted, in a short time.
5. The site is over-optimized for the search engine.
If the points above are handled well, indexing the site is only a matter of time. Website optimization is easy to start but hard to do well, so take care: do not chase rankings at the expense of user experience, or the end result will only disappoint you and the return will be poor.