Wayback Machine

Wayback Machine

Logo: stylized text reading "INTERNET ARCHIVE WAYBACK MACHINE", in black except for "WAYBACK", which is in red.
Web address: web.archive.org
Commercial: No
Type of site: Archive
Registration: Optional
Written in: HTML, CSS, JavaScript, Java, Python
Owner: Internet Archive
Current status: Temporarily offline (expected back Monday, October 14, 2024)

^ a: Although formally blocked, enforcement is not consistent and depends on the region.[1]

The Wayback Machine is a digital archive of the World Wide Web founded by the Internet Archive, an American nonprofit organization based in San Francisco, California. Created in 1996 and launched to the public in 2001, it allows users to go "back in time" to see how websites looked in the past. Its founders, Brewster Kahle and Bruce Gilliat, developed the Wayback Machine to provide "universal access to all knowledge" by preserving archived copies of defunct web pages.[2]

The Wayback Machine began saving web pages on May 10, 1996, and had saved more than 38.2 billion of them by the end of 2009. As of January 3, 2024, it has archived more than 860 billion web pages and well over 99 petabytes of data.[3][4]

History

The Wayback Machine began archiving cached web pages in 1996. One of the earliest known pages was archived on May 10, 1996 (UTC).[5]

Internet Archive founders Brewster Kahle and Bruce Gilliat launched the Wayback Machine in San Francisco, California,[6] in October 2001,[7][8] primarily to address the problem of web content vanishing whenever it is changed or when a website is shut down.[9] The service enables users to see archived versions of web pages across time, which the archive calls a "three-dimensional index".[10] Kahle and Gilliat created the machine hoping to archive the entire Internet and provide "universal access to all knowledge".[11] The name "Wayback Machine" is a reference to a fictional time-traveling device in the 1960s animated cartoon The Adventures of Rocky and Bullwinkle and Friends.[12][13][14] In a segment of the cartoon entitled "Peabody's Improbable History", the characters Mister Peabody and Sherman use the "Wayback Machine" to witness and participate in famous historical events.

From 1996 to 2001, the information was kept on digital tape, with Kahle occasionally allowing researchers and scientists to tap into the "clunky" database.[15] When the archive reached its fifth anniversary in 2001, it was unveiled and opened to the public in a ceremony at the University of California, Berkeley.[16] By the time the Wayback Machine launched, it already contained over 10 billion archived pages.[17] The data is stored on the Internet Archive's large cluster of Linux nodes.[11] It revisits and archives new versions of websites on occasion (see technical details below).[18] Sites can also be captured manually by entering a website's URL into the search box, provided that the website allows the Wayback Machine to "crawl" it and save the data.[19]

Recent event history

Date Event description
2020-10-30 The Wayback Machine began fact-checking content.[20]
2021-05 On the occasion of the Internet Archive's 25th anniversary, the Wayback Machine introduced the "Wayforward Machine", which allows users to "travel to the Internet in 2046, where knowledge is under siege".[22][23]
2022-01 From this date, ad-server domains were blocked from capture.[21]
2024-10-11 The Wayback Machine was taken offline for examination and system security upgrades (expected to last just a few days).[24]

Technical information

The Wayback Machine's software has been developed to "crawl" the Web and download all publicly accessible information and data files on webpages, the Gopher hierarchy, the Netnews (Usenet) bulletin board system, and downloadable software.[25] The information collected by these "crawlers" does not include all the information available on the Internet, since much of the data is restricted by the publisher or stored in databases that are not accessible. To overcome inconsistencies in partially cached websites, Archive-It.org was developed in 2005 by the Internet Archive as a means of allowing institutions and content creators to voluntarily harvest and preserve collections of digital content, and create digital archives.[26]

Crawls are contributed from various sources, some imported from third parties and others generated internally by the Archive.[18] For example, crawls have been contributed by the Sloan Foundation and Alexa, crawls have been run by the Internet Archive on behalf of NARA and the Internet Memory Foundation, and mirrors have been made of Common Crawl.[18] The "Worldwide Web Crawls" have been running since 2010 and capture the global Web.[18][27]

Documents and resources are stored under time-stamped URLs, where a 14-digit stamp such as 20241115022527 encodes the capture date and time. A page's individual resources, such as images, style sheets, and scripts, as well as its outgoing hyperlinks, are linked with the time stamp of the currently viewed page, so each redirects automatically to its own capture closest in time.[28]
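
That scheme makes snapshot URLs straightforward to construct. The following is a minimal Python sketch, assuming the familiar https://web.archive.org/web/<timestamp>/<url> layout, where the 14-digit time stamp is YYYYMMDDhhmmss in UTC:

    # Build a Wayback Machine capture URL for a given moment in time.
    # If no capture exists at exactly that second, the service redirects
    # to the capture closest in time.
    from datetime import datetime, timezone

    def snapshot_url(original_url, ts):
        stamp = ts.strftime("%Y%m%d%H%M%S")  # 14-digit UTC time stamp
        return "https://web.archive.org/web/" + stamp + "/" + original_url

    print(snapshot_url("https://example.com/",
                       datetime(2024, 11, 15, 2, 25, 27, tzinfo=timezone.utc)))
    # -> https://web.archive.org/web/20241115022527/https://example.com/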

The frequency of snapshot captures varies per website.[18] Websites in the "Worldwide Web Crawls" are included in a "crawl list", with the site archived once per crawl.[18] A crawl can take months or even years to complete, depending on size.[18] For example, "Wide Crawl Number 13" started on January 9, 2015, and completed on July 11, 2016.[29] However, there may be multiple crawls ongoing at any one time, and a site might be included in more than one crawl list, so how often a site is crawled varies widely.[18]

Since October 2019, users have been limited to 15 archival requests and retrievals per minute.[30]
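
That limit works out to one request every four seconds. A client that wants to stay under it might throttle itself along the following lines (a generic sketch; the fetch callable is hypothetical and stands in for whatever HTTP client is used):

    # Space successive requests at least 4 seconds apart so that no
    # more than 15 are issued in any one minute.
    import time

    MIN_INTERVAL = 60.0 / 15  # seconds between requests

    def polite_fetch(urls, fetch):
        """Yield fetch(url) for each URL without exceeding the rate limit."""
        last = None
        for url in urls:
            if last is not None:
                wait = MIN_INTERVAL - (time.monotonic() - last)
                if wait > 0:
                    time.sleep(wait)
            last = time.monotonic()
            yield fetch(url)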

Storage capacity and growth

As technology has developed over the years, the storage capacity of the Wayback Machine has grown. In 2003, after only two years of public access, the Wayback Machine was growing at a rate of 12 terabytes per month. The data is stored on PetaBox rack systems custom designed by Internet Archive staff. The first 100TB rack became fully operational in June 2004, although it soon became clear that they would need much more storage than that.[31][32]

The Internet Archive migrated its customized storage architecture to Sun Open Storage in 2009, hosting a new data center in a Sun Modular Datacenter on Sun Microsystems' California campus.[33] As of 2009, the Wayback Machine contained approximately three petabytes of data and was growing at a rate of 100 terabytes each month.[34]

A new, improved version of the Wayback Machine, with an updated interface and a fresher index of archived content, was made available for public testing in 2011. In it, captures appear in a calendar layout, with circles whose width visualizes the number of crawls on each day, though without the classic version's asterisk markers for duplicate captures or its advanced search page.[35][36] A top toolbar was added to facilitate navigating between captures, and a bar chart visualizes the frequency of captures per month over the years.[37] Features like "Changes", "Summary", and a graphical site map were added subsequently.

In March 2011, it was said on the Wayback Machine forum that "the Beta of the new Wayback Machine has a more complete and up-to-date index of all crawled materials into 2010, and will continue to be updated regularly. The index driving the classic Wayback Machine only has a little bit of material past 2008, and no further index updates are planned, as it will be phased out this year."[38] Also in 2011, the Internet Archive installed its sixth pair of PetaBox racks, which increased the Wayback Machine's storage capacity by 700 terabytes.[39]

In January 2013, the company announced a milestone of 240 billion archived URLs.[40]

In October 2013, the company introduced the "Save a Page" feature,[41][42] which allows any Internet user to archive the contents of a URL and, unlike the preceding liveweb feature, quickly generates a permanent link.

In December 2014, the Wayback Machine contained 435 billion web pages (almost nine petabytes of data) and was growing at about 20 terabytes a week.[17][43][44]

In July 2016, the Wayback Machine reportedly contained around 15 petabytes of data.[45]

In September 2018, the Wayback Machine contained over 25 petabytes of data.[46][47]

As of December 2020, the Wayback Machine contained over 70 petabytes of data.[48]

As of January 2024, the Internet Archive attests to having stored well over 99 petabytes of data.[3][4]

Wayback Machine growth[49][50]

Year  Pages archived
2004  30,000,000,000
2005  40,000,000,000
2008  85,000,000,000
2012  150,000,000,000
2013  373,000,000,000
2014  400,000,000,000
2015  452,000,000,000
2016  459,000,000,000
2017  279,000,000,000
2018  310,000,000,000
2019  345,000,000,000
2020  405,000,000,000
2021  514,000,000,000
2022  640,000,000,000
2024  866,000,000,000

Wayback Machine APIs

The Wayback Machine service offers three public APIs: SavePageNow, Availability, and CDX.[51] SavePageNow archives web pages on demand. The Availability API reports whether an archived capture exists for a given web page and returns the closest one.[52] The CDX API supports complex querying, filtering, and analysis of capture metadata.[53][54]
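
As an illustration, the Availability API answers a single HTTP GET. The sketch below uses only the Python standard library; the endpoint and JSON field names follow the Archive's public API documentation:

    # Query the Wayback Machine Availability API for the archived
    # capture of a URL closest to a given time stamp.
    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def closest_capture(url, timestamp=""):
        """Return metadata for the closest capture of `url`, or None."""
        query = urlencode({"url": url, "timestamp": timestamp})
        with urlopen("https://archive.org/wayback/available?" + query) as resp:
            data = json.load(resp)
        return data.get("archived_snapshots", {}).get("closest")

    # e.g. closest_capture("example.com", "20060101") returns a dict
    # with "url", "timestamp", "status", and "available" keys.

SavePageNow is likewise reachable by a simple request to https://web.archive.org/save/ followed by the URL to be archived, and the CDX API returns one record per capture for bulk analysis.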

Website exclusion policy

Historically, the Wayback Machine has respected the robots exclusion standard (robots.txt) in determining whether a website would be crawled and, if already crawled, whether its archives would be publicly viewable. Website owners could opt out of the Wayback Machine through robots.txt, and the Archive applied the rules retroactively: if a site blocked the Internet Archive, any previously archived pages from the domain were immediately rendered unavailable as well. In addition, the Internet Archive stated, "Sometimes, a website owner will contact us directly and ask us to stop crawling or archiving a site. We comply with these requests."[55] The website also says: "The Internet Archive is not interested in preserving or offering access to Web sites or other internet documents of persons who do not want their materials in the collection."[56][57]
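
Concretely, the historical opt-out took the form of a short robots.txt file at the site root. The excerpt below is illustrative; "ia_archiver" is the user-agent token the Archive is widely documented to have honored:

    # Historical opt-out: blocking the Archive's crawler stopped new
    # captures and retroactively hid existing ones from public view.
    User-agent: ia_archiver
    Disallow: /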

On April 17, 2017, reports surfaced of sites that had gone defunct and become parked domains that used robots.txt to exclude themselves from search engines, which inadvertently excluded them from the Wayback Machine as well.[58] Following this, the Internet Archive changed its policy so that a site is removed from the Wayback Machine only on an explicit exclusion request.[28]

Oakland archive policy

Wayback's retroactive exclusion policy is based in part upon Recommendations for Managing Removal Requests and Preserving Archival Integrity published by the School of Information Management and Systems at University of California, Berkeley in 2002, which gives a website owner the right to block access to the site's archives.[59] Wayback has complied with this policy to help avoid expensive litigation.[60]

The Wayback Machine's retroactive exclusion policy began to relax in 2017, when the Archive stopped honoring robots.txt on U.S. government and military websites for both crawling and displaying web pages. As of April 2017, the Wayback Machine ignores robots.txt more broadly, not just for U.S. government websites.[61][62][63][64]

Uses

From its public launch in 2001, the Wayback Machine has been studied by scholars both for the ways it stores and collects data and for the actual pages contained in its archive. As of 2013, scholars had written about 350 articles on the Wayback Machine, mostly from the information technology, library science, and social science fields. Social science scholars have used the Wayback Machine to analyze how the development of websites from the mid-1990s to the present has affected companies' growth.[17]

When the Wayback Machine archives a page, it usually includes most of the page's hyperlinks, keeping those links active even though the instability of the live Web could just as easily have broken them. Researchers in India studied the effectiveness of the Wayback Machine's ability to save hyperlinks in online scholarly publications and found that it saved slightly more than half of them.[65]

"Journalists use the Wayback Machine to view dead websites, dated news reports, and changes to website contents. Its content has been used to hold politicians accountable and expose battlefield lies."[66] In 2014, an archived social media page of Igor Girkin, a separatist rebel leader in Ukraine, showed him boasting about his troops having shot down a suspected Ukrainian military airplane before it became known that the plane actually was a civilian Malaysian Airlines jet (Malaysia Airlines Flight 17), after which he deleted the post and blamed Ukraine's military for downing the plane.[66][67] In 2017, the March for Science originated from a discussion on Reddit that indicated someone had visited Archive.org and discovered that all references to climate change had been deleted from the White House website. In response, a user commented, "There needs to be a Scientists' March on Washington".[68][69][70]

Furthermore, the site is used heavily by Wikipedia editors for verification, providing access to references and supporting content creation.[71] When new URLs are added to Wikipedia, the Internet Archive has been archiving them.[71]

In September 2020, a partnership was announced with Cloudflare to automatically archive websites served via its "Always Online" service, which will also allow it to direct users to its copy of the site if it cannot reach the original host.[72]

Limitations

In 2014, there was a six-month lag time between when a website was crawled and when it became available for viewing in the Wayback Machine.[73] As of 2024, the lag time is 3 to 10 hours.[28] The Wayback Machine offers only limited search facilities. Its "Site Search" feature allows users to find a site based on words describing the site, rather than words found on the web pages themselves.[74]

The Wayback Machine does not include every web page ever made due to the limitations of its web crawler. The Wayback Machine cannot completely archive web pages that contain interactive features such as Flash platforms and forms written in JavaScript and progressive web applications, because those functions require interaction with the host website. This means that, since approximately July 9, 2013, the Wayback Machine has been unable to display YouTube comments when saving videos' watch pages, as, according to the Archive Team, comments are no longer "loaded within the page itself."[75] The Wayback Machine's web crawler has difficulty extracting anything not coded in HTML or one of its variants, which can often result in broken hyperlinks and missing images. Due to this, the web crawler cannot archive "orphan pages" that are not linked to by other pages.[74][76] The Wayback Machine's crawler only follows a predetermined number of hyperlinks based on a preset depth limit, so it cannot archive every hyperlink on every page.[27]
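
The effect of a preset depth limit can be shown with a generic breadth-first crawl in Python (a toy sketch, not the Archive's actual crawler; get_links is a hypothetical fetcher returning the URLs linked from a page):

    # Pages more than `max_depth` hops from a seed are never visited,
    # and pages linked from nowhere ("orphan pages") are never found.
    from collections import deque

    def crawl(seed, get_links, max_depth=2):
        seen = {seed}
        queue = deque([(seed, 0)])
        captured = []
        while queue:
            url, depth = queue.popleft()
            captured.append(url)  # "capture" the page
            if depth == max_depth:
                continue
            for link in get_links(url):
                if link not in seen:
                    seen.add(link)
                    queue.append((link, depth + 1))
        return captured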

In legal evidence

Civil litigation

Netbula LLC v. Chordiant Software Inc.

In a 2009 case, Netbula, LLC v. Chordiant Software Inc., defendant Chordiant filed a motion to compel Netbula to disable the robots.txt file on its website that was causing the Wayback Machine to retroactively remove access to previous versions of pages it had archived from Netbula's site, pages that Chordiant believed would support its case.[77]

Netbula objected to the motion on the ground that defendants were asking to alter Netbula's website and that they should have subpoenaed Internet Archive for the pages directly.[78] An employee of Internet Archive filed a sworn statement supporting Chordiant's motion, however, stating that it could not produce the web pages by any other means "without considerable burden, expense and disruption to its operations."[77]

Magistrate Judge Howard Lloyd in the Northern District of California, San Jose Division, rejected Netbula's arguments and ordered them to disable the robots.txt blockage temporarily in order to allow Chordiant to retrieve the archived pages that they sought.[77]

Telewizja Polska USA, Inc. v. Echostar Satellite

In an October 2004 case, Telewizja Polska USA, Inc. v. Echostar Satellite, No. 02 C 3293, 65 Fed. R. Evid. Serv. 673 (N.D. Ill. October 15, 2004), a litigant attempted to use the Wayback Machine archives as a source of admissible evidence, perhaps for the first time. Telewizja Polska is the provider of TVP Polonia and EchoStar operates the Dish Network. Prior to the trial proceedings, EchoStar indicated that it intended to offer Wayback Machine snapshots as proof of the past content of Telewizja Polska's website. Telewizja Polska brought a motion in limine to suppress the snapshots on the grounds of hearsay and unauthenticated source, but Magistrate Judge Arlander Keys rejected Telewizja Polska's assertion of hearsay and denied TVP's motion in limine to exclude the evidence at trial.[79][80] At the trial, however, District Court Judge Ronald Guzman, the trial judge, overruled Magistrate Keys' findings, and held that neither the affidavit of the Internet Archive employee nor the underlying pages (i.e., the Telewizja Polska website) were admissible as evidence. Judge Guzman reasoned that the employee's affidavit contained both hearsay and inconclusive supporting statements, and that the purported web page printouts were not self-authenticating.[81][82]

Patent law

The United States Patent and Trademark Office and the European Patent Office will accept date stamps from the Internet Archive as evidence of when a given Web page was accessible to the public. These dates are used to determine, for instance, whether a Web page is available as prior art in examining a patent application.[83]

Limitations of utility

There are technical limitations to archiving a website, and as a consequence, opposing parties in litigation can misuse the results provided by website archives. This problem can be exacerbated by the practice of submitting screenshots of web pages in complaints, answers, or expert witness reports when the underlying links are not exposed and can therefore contain errors. For example, archives such as the Wayback Machine do not fill out forms and therefore do not include the contents of non-RESTful e-commerce databases in their archives.[84]

Legal status

In Europe, the Wayback Machine could be interpreted as violating copyright laws. Only the content creator can decide where their content is published or duplicated, so the Archive would have to delete pages from its system upon request of the creator.[85] The exclusion policies for the Wayback Machine may be found in the FAQ section of the site.[86]

Some cases have been brought against the Internet Archive specifically for its Wayback Machine archiving efforts.

Archived content legal issues

Scientology

In late 2002, the Internet Archive removed various sites that were critical of Scientology from the Wayback Machine.[87] An error message stated that this was in response to a "request by the site owner".[88] Later, it was clarified that lawyers from the Church of Scientology had demanded the removal and that the site owners did not want their material removed.[89]

Healthcare Advocates, Inc.

In 2003, Harding Earley Follmer & Frailey defended a client in a trademark dispute using the Archive's Wayback Machine. The attorneys were able to demonstrate that the claims made by the plaintiff were invalid, based on the content of the plaintiff's website from several years prior. The plaintiff, Healthcare Advocates, then amended its complaint to include the Internet Archive, accusing the organization of copyright infringement as well as violations of the DMCA and the Computer Fraud and Abuse Act. Healthcare Advocates claimed that, since it had installed a robots.txt file on its website, albeit only after the initial lawsuit was filed, the Archive should have removed all previous copies of the plaintiff's website from the Wayback Machine; however, some material continued to be publicly visible.[90] The lawsuit was settled out of court after Wayback fixed the problem.[91]

Suzanne Shell

Activist Suzanne Shell filed suit in December 2005, demanding Internet Archive pay her US$100,000 for archiving her website profane-justice.org between 1999 and 2004.[92][93] Internet Archive filed a declaratory judgment action in the United States District Court for the Northern District of California on January 20, 2006, seeking a judicial determination that it had not violated Shell's copyright. Shell responded with a countersuit against Internet Archive for archiving her site, which she alleged violated her terms of service.[94] On February 13, 2007, a judge for the United States District Court for the District of Colorado dismissed all counterclaims except breach of contract.[93] The Internet Archive did not move to dismiss the copyright infringement claims Shell asserted arising out of its copying activities, which would also go forward.[95]

On April 25, 2007, Internet Archive and Suzanne Shell jointly announced the settlement of their lawsuit.[92] The Internet Archive said it "...has no interest in including materials in the Wayback Machine of persons who do not wish to have their Web content archived. We recognize that Ms. Shell has a valid and enforceable copyright in her Web site and we regret that the inclusion of her Web site in the Wayback Machine resulted in this litigation." Shell said, "I respect the historical value of Internet Archive's goal. I never intended to interfere with that goal nor cause it any harm."[96]

Daniel Davydiuk

Between 2013 and 2016, a pornographic actor named Daniel Davydiuk tried to remove archived images of himself from the Wayback Machine's archive, first by sending multiple DMCA requests to the archive, and then by appealing to the Federal Court of Canada.[97][98][99] The images were removed from the website in 2017.

FlexiSpy

In 2018, archives of stalkerware application FlexiSpy's website were removed from the Wayback Machine. The company claimed to have contacted the Internet Archive, presumably to remove the archives of its website.[100]

Censorship and other threats

Archive.org is blocked in China.[101][102][103] The Internet Archive was blocked in its entirety in Russia in 2015–16, ostensibly for hosting a Jihad outreach video.[66][104][105] Since 2016, the website has been available again in its entirety, although in 2016 Russian commercial lobbyists were suing the Internet Archive to ban it on copyright grounds.[106]

In March 2015, it was reported that security researchers had become aware of the threat posed by the service's unintentional hosting of malicious binaries from archived sites.[107][108]

Alison Macrina, director of the Library Freedom Project, notes that "while librarians deeply value individual privacy, we also strongly oppose censorship".[66]

There is at least one case in which an article was removed from the archive shortly after it had been removed from its original website. A Daily Beast reporter had written an article that outed several gay Olympic athletes in 2016 after he made a fake profile posing as a gay man on a dating app. The Daily Beast removed the article after it was met with widespread furor; not long after, the Internet Archive did as well, stating emphatically that it did so for no reason other than to protect the safety of the outed athletes.[66]

Other threats include natural disasters,[109] destruction (both remote and physical),[110] manipulation of the archive's contents, problematic copyright laws,[111] and surveillance of the site's users.[112]

Alexander Rose, executive director of the Long Now Foundation, suspects that over the span of multiple generations "next to nothing" will survive in a useful way, stating, "If we have continuity in our technological civilization, I suspect a lot of the bare data will remain findable and searchable. But I suspect almost nothing of the format in which it was delivered will be recognizable," because sites "with deep back-ends of content-management systems like Drupal and Ruby and Django" are harder to archive.[113]

In a 2016 article reflecting on the preservation of human knowledge, The Atlantic commented that the Internet Archive, which describes itself as built for the long term,[114] "is working furiously to capture data before it disappears without any long-term infrastructure to speak of."[115]

In September 2024, the Internet Archive suffered a data breach that exposed 31 million records containing personal information, including email addresses and passwords. On October 9, 2024, the site went down due to a distributed denial-of-service attack.[116]

References

  1.
  2.
  3. The current number of archived pages can be seen at the archive's home page.
  4.
  5.
  6.
  7.
  8.
  9.
  10.
  11.
  12.
  13.
  14. Keith Scott (2000). The Moose that Roared: The Story of Jay Ward, Bill Scott, a Flying Squirrel, and a Talking Moose. St. Martin's Press. ISBN 0-312-19922-8.
  15.
  16.
  17.
  18.
  19.
  20.
  21. Attempts to "save page now" domains such as tpc.googlesyndication.com, s0.2mdn.net, atdmt.com, or adbrite.com result in "This URL is in our block list and cannot be captured."
  22.
  23.
  24.
  25.
  26.
  27.
  28.
  29.
  30.
  31.
  32.
  33.
  34.
  35.
  36.
  37.
  38.
  39.
  40.
  41.
  42.
  43.
  44.
  45.
  46.
  47.
  48.
  49.
  50.
  51.
  52. waybackpy on GitHub.
  53.
  54.
  55.
  56.
  57.
  58.
  59.
  60.
  61.
  62.
  63.
  64.
  65.
  66.
  67.
  68.
  69.
  70.
  71.
  72.
  73.
  74.
  75.
  76.
  77.
  78.
  79.
  80.
  81.
  82.
  83.
  84.
  85.
  86.
  87.
  88. Author and date indicate initiation of the forum thread.
  89.
  90.
  91.
  92. Internet Archive v. Shell, 505 F.Supp.2d 755 at justia.com, 1:2006cv01726 (Colorado District Court, August 31, 2006) ("'April 25, 2007 Settlement agreement announced.' Filing 65, 2007-04-30: '...therefore ORDERED that this matter shall be DISMISSED WITH PREJUDICE...'").
  93.
  94.
  95.
  96.
  97.
  98.
  99.
  100.
  101.
  102.
  103.
  104.
  105.
  106.
  107.
  108.
  109.
  110.
  111.
  112.
  113.
  114.
  115.
  116.
