01-03-2014, 11:10 AM
Post: #1
[GET] Portable Screaming Frog SEO Spider 2.11
Portable Screaming Frog SEO Spider 2.11 | 76.43Mb
Designed specifically for SEO, this program is a useful tool for analysing websites. It works like a simple spider: running inside the program's shell, it visits a site and collects information which you can then analyse. Any site, even a popular and successful one, needs to be checked for duplicate pages, broken links, and other errors that hold it back. If the site is small, this can be done manually. But what if the site has hundreds of pages? Today I want to talk about a very useful program for SEO auditing of a site, aimed at anyone engaged in site promotion. The program is called Screaming Frog SEO Spider, which loosely translates as "the screaming SEO frog". It was developed in the UK. The full version is paid: the fee is 99 pounds per year of use. The lite version is free and lets you scan up to 500 pages of a site, which is enough for most sites.

Screaming Frog SEO Spider lets you audit a site from an SEO point of view: it analyses titles, descriptions, images, CSS and links, extracts keywords, and more. SEO Spider is indispensable for analysing medium and large sites, whose pages would be very labour-intensive to check manually. It lets you find anything you want in a site's source code, whether a particular piece of text or code. The spider crawls the specified sites much like the Google robot, obeys the directives in robots.txt, and identifies itself with the user-agent "Screaming Frog SEO Spider". The data the "screaming frog" collects, which can serve as the basis for SEO recommendations, can be filtered however you need.

What can this program do? Once started, Screaming Frog SEO Spider simulates the Google search spider, is governed by the robots.txt directives, and returns all the data for a specific site.
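The robots.txt behaviour described above can be sketched with Python's standard library alone: parse a robots.txt file and ask whether a given user-agent may fetch a URL. The sample robots.txt and URLs below are illustrative assumptions, not anything shipped with the program.

```python
# Minimal sketch of a robots.txt check, as a crawler like this one would
# perform before fetching a page. Uses only the standard library.
from urllib import robotparser

def allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if robots.txt permits this user-agent to fetch the URL."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)

# Example robots.txt that blocks /private/ for every crawler.
robots = """User-agent: *
Disallow: /private/
"""
print(allowed(robots, "Screaming Frog SEO Spider", "http://example.com/page.html"))   # True
print(allowed(robots, "Screaming Frog SEO Spider", "http://example.com/private/x"))   # False
```

A real crawl would fetch `robots.txt` over HTTP first and send the user-agent string with every request; this sketch only shows the allow/deny decision.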
It reports descriptions, meta tags, keywords, images, and links, both external and internal. All the test results are conveniently laid out in the corresponding tables. It also shows broken links and redirecting pages. With Screaming Frog SEO Spider you can create a sitemap, and the program can export the collected data to Excel, with the exported information sorted the way it is displayed in the program's shell. The program can also generate an XML site map, check robots.txt, analyse a site either obeying the current robots.txt or ignoring it, and let you customise the appearance of the robot that indexes the site. All in all it is quite handy to work with.

What information does SEO Spider put in the report?
* Errors - client and server errors (no response, 4XX and 5XX).
* Redirects (3XX, permanent or temporary).
* Content - the Content-Type.
* Title - the page title. Page Titles is a very useful tab with the page titles and information about them. Here you can find: pages with no title; pages with identical titles; titles longer than 70 characters (until recently the accepted title length for Google was 60-70 characters, but it now seems titles are measured not in characters but in pixels); pages whose title coincides with the H1 heading.
* Meta Description - the Description meta tag, which should be no more than 156 characters: its length, plus a list of pages with duplicate or missing meta descriptions.
* Hash - a checksum value used to find duplicate pages.
* Meta Keywords - keywords.
* H1 - the contents of the H1 tag on the page, no more than 70 characters.
* H2 - the contents of the H2 tag on the page, no more than 70 characters.
* Tags - robots meta directives: index, nosnippet, noodp, noydir, etc.
* Inlinks - the number of incoming links to a page.
* Outlinks - all pages containing outgoing links.
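The length limits listed above (70 characters for titles and H1/H2, 156 for meta descriptions) are simple to check yourself. This is a sketch of that check under the limits stated in the post; the page data below is made up, and the function name is my own.

```python
# Flag the length problems the report describes: missing titles, titles
# over 70 characters, and meta descriptions over 156 characters.
TITLE_LIMIT = 70
DESCRIPTION_LIMIT = 156

def length_issues(title: str, description: str) -> list[str]:
    issues = []
    if not title:
        issues.append("missing title")
    elif len(title) > TITLE_LIMIT:
        issues.append(f"title too long ({len(title)} > {TITLE_LIMIT})")
    if len(description) > DESCRIPTION_LIMIT:
        issues.append(f"description too long ({len(description)} > {DESCRIPTION_LIMIT})")
    return issues

print(length_issues("Home", "A short description"))  # []
print(length_issues("x" * 80, "ok"))                 # ['title too long (80 > 70)']
```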
* XML Sitemap Generator - you can create an XML sitemap.
* Meta Refresh (including the target page and time delay).
* Canonical attribute.
* Page weight.
* Page depth.
* Internal links.
* External links, etc.

Working with the program is easy. Enter the site's URL in the "Enter url to spider" field and press "Start". Then wait for the program to finish and analyse the data. The first tab, "Internal", covers the website itself: all the pages found are listed here. If you highlight an address row, the bottom table shows all the information for that page:
* Address - the URL of the site page.
* Content - the content type (text/html, image/jpeg, text/css, and so on), as well as the page's encoding (such as utf-8).
* Status Code - the web server response code (for example 200, 301, 404).
* Status - the status of the web server response (for a 200 response code the status is OK, while for 301 it is Moved Permanently).
* Title 1 - the page title.
* Title 1 Length - the length of the title in characters.
* Meta Description 1 - the page description (the Description meta tag).
* Meta Description Length - the length of the Meta Description content in characters.
* Meta Keyword 1 - the content of the Keywords meta tag.
* Meta Keywords Length - the length of the Meta Keywords content in characters.
* H1-1 - the contents of the first H1 tag on the page (in fact there can be more: if there is a second H1, the program automatically displays an H1-2 column).
* H1 Len-1 - the length of the tag's content in characters.
* H2-1 - the contents of the first H2 tag on the page.
* H2 Len-1 - the length of the tag's content in characters.
* Meta Data 1 - robots meta data (for example, if the page declares noindex, follow, the program shows "noindex, follow" in the Meta Data 1 column).
* Meta Refresh 1 (I have not found out what this is - if you know, write it in the comments).
* Canonical - the preferred URL, the contents of the canonical link element.
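The XML Sitemap Generator feature mentioned above produces a standard sitemap file. A minimal sketch of that output format, using only the standard library, looks like this; the URL list is an illustrative assumption.

```python
# Build a minimal XML sitemap in the standard sitemaps.org format, the
# kind of file the program's sitemap generator produces.
import xml.etree.ElementTree as ET

def build_sitemap(urls: list[str]) -> str:
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url  # one <loc> per page URL
    return ET.tostring(urlset, encoding="unicode")

xml = build_sitemap(["http://example.com/", "http://example.com/about"])
print(xml)
```

A full sitemap can also carry `<lastmod>`, `<changefreq>`, and `<priority>` per URL; they are optional in the format.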
* Size - the size of the page in bytes (divide by 1024 if you want kilobytes, KB).
* Level - the nesting level of the page, i.e. the number of clicks needed, starting from the site's main page, to reach it.
* Inlinks - the number of inbound internal links to the page.
* Outlinks - the number of outbound internal links from the page.
* External Outlinks - the number of external outbound links (absolutely all of them, including nofollow).
* Hash - the hash value of the page. This is an excellent test for duplicate content: if the hash values of two different pages are equal, the content on those two pages is identical.

Screaming Frog SEO Spider primarily collects Title 1, Meta Description 1, Meta Keyword 1, H1-1, H2-1, and so on (that is, the contents of the first such tags in the page's HTML, designated with the index 1), but if a page also has a Title 2, Meta Description 2, Meta Keyword 2, H1-2 or H2-2, columns for those values are created in the report automatically.

The second tab, "External", lets you view all the links that go from your site to other websites. Select the link you are interested in, and the bottom table shows its details. External contains the following data about external links on the website:
* Address - the URL of the external link.
* Content - the link type (text/html, image/gif, application/x-javascript, and so on), as well as the encoding (such as utf-8).
* Status Code - the web server response code (for example 200, 301, 404).
* Status - the status of the web server response (for a 200 response code the status is OK, while for 302 it is Found).
* Level - the nesting level of the page, i.e. the number of clicks from the main page.
* Inlinks - the number of links to this external URL across the whole site.
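The Hash column's duplicate test works exactly as described: equal hashes mean equal content. Here is a small sketch of that idea; MD5 and the sample pages are my own assumptions, since the post does not say which hash function the program uses.

```python
# Hash-based duplicate detection: pages whose content hashes to the same
# value carry identical content, so only one copy needs to survive.
import hashlib

def page_hash(html: str) -> str:
    return hashlib.md5(html.encode("utf-8")).hexdigest()

pages = {
    "/a": "<html><body>Hello</body></html>",
    "/b": "<html><body>Hello</body></html>",  # byte-for-byte copy of /a
    "/c": "<html><body>World</body></html>",
}
seen: dict[str, str] = {}
for url, html in pages.items():
    h = page_hash(html)
    if h in seen:
        print(f"{url} duplicates {seen[h]}")  # prints: /b duplicates /a
    else:
        seen[h] = url
```

Note that this catches exact duplicates only; two pages that differ by a single byte hash differently.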
The Response Codes tab contains information about the pages' response codes (with filters): it shows the HTTP headers of pages and the errors 5XX, 4XX and 3XX as well as 200 responses.
* Address - the URL of every page, as well as of every outgoing link.
* Content - the content type (text/html, image/png, image/gif, and so on), as well as the encoding (for example utf-8).
* Status Code - the web server response code (for example 200, 301, 302).
* Status - the status of the web server response (for a 200 response code the status is OK, 301 is Moved Permanently, and for 302 the status is Found).
* Redirect URI - this column contains the destination URL of the redirect; the type of redirect (301, 302, and so on) is shown in the Status Code column.

The URL tab collects all pages with problem addresses, or addresses longer than 115 characters. It lists the problematic site addresses: with characters outside ASCII; with underscores (this is of course not considered a violation by the search engines, but dashes "-" are still preferable in addresses, since they separate words and underscores do not); with capital letters (addresses like site.ru/primer and site.ru/Primer are considered duplicates); duplicate pages; dynamic addresses (they are unfriendly and create duplicate content); and addresses longer than 115 characters - the shorter an address, the better and clearer.

In Links (inbound links) is near the bottom. Select a URL in the main window, click "In Links", and you get the following data:
* Type - the type of reference (HREF, JS, CSS, IMG).
* From - the URL of the page that links to it.
* To - the link selected in the main window.
* Anchor Text - the anchor text of the link.
* Alt Text - the alt text of the image.
* Follow - the link's follow attribute (if Follow is true, the link does not contain the attribute rel="nofollow"; if it is false, the link does contain rel="nofollow").
The Out Links tab (outbound links) is also near the bottom.
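The URL checks described above (non-ASCII characters, underscores, capital letters, length over 115) are mechanical enough to sketch directly. The threshold and rules come from the post; the function name and sample URLs are made up.

```python
# Flag the problem-address patterns the URL tab reports.
def url_issues(url: str) -> list[str]:
    issues = []
    if not url.isascii():
        issues.append("non-ASCII characters")
    if "_" in url:
        issues.append("underscores (dashes are preferable)")
    if any(c.isupper() for c in url):
        issues.append("uppercase letters")
    if len(url) > 115:
        issues.append("longer than 115 characters")
    return issues

print(url_issues("http://site.ru/primer"))   # [] - a clean address
print(url_issues("http://site.ru/My_Page"))  # flags underscores and uppercase
```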
Select the URL in the main window, click "Out Links", and you receive data similar to In Links, with the only difference that From is the link selected in the main window and To is the URL of the page it links to.

The "Meta Description" tab provides detailed information on pages' meta descriptions: the description itself and pages with duplicate Meta Descriptions. This is a very important indicator for promotion: search engines still index the Meta Description, and Google adds it to the snippet in search results.

The "Images" tab shows the number of images on the website, the weight of those images, and the alternative text that visitors see when the site loads slowly.

If you want to save the data, press the "Advanced Export" button and choose what you want to keep; you can filter out only the information you need.

Among the advantages, note the following: very fast operation; data presented in a convenient form under the relevant tabs; and the ability to configure manual filters for the data you need.

Code: http://dizzcloud.com/dl/164era8/j5bmt.Portable.Screaming.Frog.SEO.Spider.2.11.rar