Computers – A Technology Timeline

Computer: Definition

The term “computer” has its origins in the field of mathematics and was initially used to describe human individuals who performed calculations manually. The term itself predates the invention of electronic computers as we know them today.

In the early 17th century, the word “computer” emerged in English and was derived from the Latin word “computare,” meaning “to calculate” or “to reckon.” During this time, “computer” referred to humans, typically mathematicians or individuals skilled in arithmetic, who performed calculations by hand or using mechanical aids like abacuses or slide rules.

With the advent of mechanical calculating machines in the 19th century, the term “computer” began to be used to describe these devices as well. Machines such as Charles Babbage’s Analytical Engine (designed in the 1830s, though never completed) and the tabulating machines Herman Hollerith built for the 1890 U.S. census were designed to automate and facilitate mathematical computations.

However, it was in the mid-20th century, with the emergence of electronic digital computers, that the term “computer” came to be predominantly associated with these machines. Electronic computers, starting with devices like ENIAC (Electronic Numerical Integrator and Computer) and later the UNIVAC (Universal Automatic Computer), represented a significant leap forward in computing technology. They utilized electronic components to process and store data, providing much faster and more versatile computing capabilities than their mechanical counterparts.

As electronic computers became more prevalent and accessible, the term “computer” gradually shifted in usage from referring to human calculators to referring primarily to the machines themselves.

Over time, the term “computer” became firmly associated with electronic devices capable of performing complex calculations, data processing, and other computational tasks.

Today, the term “computer” commonly refers to a wide range of devices, including personal computers, laptops, smartphones, tablets, and servers, among others, that employ electronic components to process and store information, perform computations, and execute software programs.

Computers: WWII and its Aftermath

During World War II, computers played a pivotal role in various military and scientific endeavors.

Here is a brief history of computers during World War II up to 1949:

Colossus: In 1943, the Colossus, a series of electronic computers, was developed by British codebreakers at Bletchley Park. The Colossus machines were used to decrypt encrypted messages sent by the German military, particularly the Lorenz cipher. This was a significant breakthrough in signals intelligence and helped the Allies gain valuable information during the war.

ENIAC: In the United States, the Electronic Numerical Integrator and Computer (ENIAC) was developed at the University of Pennsylvania between 1943 and 1945 to compute artillery firing tables. The first general-purpose electronic digital computer, it was completed too late to see wartime service, but it demonstrated that complex calculations could be performed electronically at unprecedented speed; among its first tasks were calculations for the hydrogen bomb program.

Codebreaking and Cryptanalysis: Machines were central to Allied codebreaking and cryptanalysis efforts during the war. Alongside Colossus, the electromechanical British Bombe helped decipher messages encrypted with the German Enigma machine, while the American SIGABA cipher machine secured Allied communications and was never broken. Deciphered enemy traffic gave the Allies a decisive advantage in intelligence gathering and military operations.

Harvard Mark series: The Harvard Mark computers, developed at Harvard University, were electromechanical machines used for scientific calculations and military applications during World War II. The Harvard Mark I, completed in 1944, was one of the first programmable computers. It was used for calculations related to the design of atomic bombs and other scientific and engineering calculations.

Manchester Mark 1: The Manchester Mark 1, developed at the University of Manchester in England, became operational in 1949. It was one of the earliest stored-program computers, allowing instructions and data to be stored in the same memory. The Manchester Mark 1 contributed to scientific research and calculations after the war.

Development of Computer Architecture: During World War II and its aftermath, significant advancements were made in computer architecture. Concepts such as stored-program architecture, binary arithmetic, and electronic components laid the foundation for the future development of computers.
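The stored-program idea is easiest to see in miniature. The following Python sketch simulates a hypothetical three-instruction machine in which program and data occupy the same memory; the opcodes, instruction format, and memory layout are invented for illustration and do not correspond to any historical machine.

```python
# A toy stored-program machine: instructions and data share one memory,
# and a fetch-decode-execute loop runs them. The three-opcode instruction
# set is hypothetical, not any real machine's.

LOAD, ADD, HALT = 1, 2, 0  # invented opcodes

memory = [0] * 16
memory[0:6] = [LOAD, 8,    # acc = memory[8]
               ADD,  9,    # acc = acc + memory[9]
               HALT, 0]    # stop
memory[8], memory[9] = 2, 3  # the data, stored alongside the program

acc, pc = 0, 0  # accumulator and program counter
while True:
    op, operand = memory[pc], memory[pc + 1]  # fetch
    pc += 2
    if op == LOAD:                            # decode and execute
        acc = memory[operand]
    elif op == ADD:
        acc += memory[operand]
    elif op == HALT:
        break

print(acc)  # -> 5
```

Because the program is itself just numbers in memory, it can be loaded, replaced, or even modified like any other data, which is precisely what distinguished stored-program machines from plugboard-configured predecessors.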

The development and use of computers during World War II revolutionized cryptography, calculations, and scientific research. These early machines set the stage for further advancements in computing technology in the post-war period. The experiences gained during the war accelerated the progress of computer technology, leading to the subsequent growth and proliferation of computers in various fields.

Computers: 1950s

During the 1950s, computers were in their early stages of development and were quite different from the computers we are familiar with today.

Here is a description of real-world computers from the 1950s:

ENIAC (Electronic Numerical Integrator and Computer): Developed during World War II and completed in 1945, ENIAC was one of the earliest electronic general-purpose computers. It occupied a large room and used vacuum tubes for its logic and calculations. ENIAC was programmed by physically rewiring its circuits, making it a labor-intensive process.

UNIVAC I (UNIVersal Automatic Computer I): UNIVAC I, introduced in 1951, was the first commercially available computer in the United States. It used vacuum tubes and magnetic tape for data storage. UNIVAC I was primarily used for scientific and business applications and was notable for being the computer that predicted the outcome of the 1952 presidential election correctly.

IBM 650: Introduced in 1953, the IBM 650 was a popular computer during the 1950s. It used vacuum tubes and magnetic drum memory for data storage. The IBM 650 was designed for scientific and business calculations and was one of the first computers to be mass-produced.

IBM 704: Released in 1954, the IBM 704 was a significant advancement in computing technology. It used vacuum tubes and magnetic core memory for data storage. The IBM 704 was notable for its ability to handle scientific and engineering calculations and was widely used in research institutions and universities.

IBM 7090: Introduced in 1959, the IBM 7090 was a powerful computer that used transistors instead of vacuum tubes, which made it faster and more reliable. It featured magnetic core memory and was widely used in scientific and research applications.

These computers of the 1950s were large, room-sized machines that required specialized environments and extensive maintenance. They were primarily used for scientific calculations, military applications, and early business data processing. Programming was done using machine language or assembly language, which involved writing instructions directly in binary code or symbolic representations of machine instructions.
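To make the distinction between symbolic and binary programming concrete, the following Python sketch shows roughly what an assembler of the era did: translate mnemonics into the numeric codes the hardware executed. The mnemonics, opcode numbers, and two-word instruction format are hypothetical, chosen only to illustrate the idea.

```python
# A minimal assembler: translate symbolic mnemonics into numeric machine
# code. Mnemonics, opcode numbers, and the two-word instruction format
# are all invented for illustration.

OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 0}

def assemble(source):
    """Turn lines like 'ADD 9' into a flat list of numeric codes."""
    code = []
    for line in source.strip().splitlines():
        mnemonic, *operand = line.split()
        code.append(OPCODES[mnemonic])
        code.append(int(operand[0]) if operand else 0)
    return code

program = """
LOAD 8
ADD 9
STORE 10
HALT
"""
print(assemble(program))  # -> [1, 8, 2, 9, 3, 10, 0, 0]
```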

The computers of the 1950s were a far cry from the compact and ubiquitous devices we have today. They represented the early stages of computer technology and set the foundation for the remarkable advancements that would follow in the coming decades.

Software – State of the Art: 1958

In 1958, the field of software was still in its early stages of development, and the concept of software as we understand it today was just beginning to take shape.

Here is an overview of the state of software in 1958:

Assembly Language: Most programming during this time was done in assembly language, writing symbolic mnemonics that mapped one-for-one onto low-level machine instructions. FORTRAN, released by IBM in 1957, was only beginning to see use, and COBOL would not appear until 1959.

Limited Availability: Computers were large and expensive, primarily owned and operated by large corporations, government agencies, and research institutions. The availability of computers and access to programming resources were limited, leading to a relatively small community of programmers and software developers.

Manual Programming: Programming in the 1950s was a laborious and time-consuming process. Programmers wrote instructions at the level of the machine itself, which required a detailed understanding of the computer’s architecture and memory organization. Programming errors were common, and debugging was a challenging task.

Punch Cards and Paper Tape: Input and output were typically done using punch cards or paper tape. Programmers prepared their code on punch cards or paper tape, which were then fed into the computer using card readers or tape readers. Output was often printed on paper.

Lack of Software Engineering Practices: The field of software engineering, as we know it today, did not yet exist. There were no standardized methodologies or best practices for software development. Documentation and version control practices were minimal, making it challenging to maintain and update software systems.

Limited Applications: Software applications were primarily focused on scientific and engineering calculations, as well as military and government applications. Business data processing, such as payroll and inventory management, was also starting to be explored, but the software for such applications was still in its early stages.

Lack of User-Friendly Interfaces: Computers were operated through batch processing and console switches rather than interactive interfaces; jobs went in on punch cards or tape and results came back as printouts. Graphical user interfaces (GUIs) had not yet been developed, and interacting with computers required a deep understanding of the machine’s architecture and commands, making them accessible only to skilled technicians and programmers.

The state of software in 1958 was characterized by limited availability, manual programming processes, and a focus on scientific and engineering applications.

The software development practices and tools we take for granted today were yet to be developed, and the field was still in its infancy compared to the advancements that would follow in the coming decades.

Software availability was limited compared to the vast range of software options we have today. Computers at that time were primarily used for scientific, engineering, and military applications. Here are a few examples of software that were available during that period:

FORTRAN (Formula Translation): FORTRAN was one of the earliest high-level programming languages developed for scientific and engineering calculations. It allowed programmers to write complex mathematical formulas and equations more easily than in assembly language.

COBOL (Common Business-Oriented Language): COBOL was just over the horizon in 1958; defined in 1959, it was developed specifically for business data processing and aimed to standardize and simplify the programming of business applications, such as payroll and inventory management.

Assembly Language Libraries: Assembly language libraries provided pre-written routines and subroutines for common tasks, such as mathematical operations, input/output handling, and memory management. These libraries allowed programmers to reuse code and save time.

Autocode: Autocode was a family of early high-level programming languages developed at the University of Manchester beginning in the early 1950s. It was designed to simplify programming tasks and improve code efficiency, primarily for scientific and mathematical calculations.

System Utilities: Various system utilities were available to assist with tasks such as managing computer resources, handling input/output operations, and performing system-level functions. These utilities were often specific to the hardware and operating systems of the particular computer systems in use.

It’s important to note that software development during this time was largely driven by specific hardware architectures, and software portability between different computer systems was limited. Additionally, the software available was typically custom-developed for specific applications or projects, and there were no standardized software packages or commercial software offerings like we have today.

The software landscape in 1958 was relatively limited compared to modern standards, reflecting the early stages of software development and the specialized nature of computer usage during that era.

Computers: 1960s

Computers in the 1960s continued to evolve and improve upon the developments made in the previous decade.

Here is a description of real-world computers from the 1960s:

IBM System/360: Introduced in 1964, the IBM System/360 was a groundbreaking series of computers that offered a wide range of models to suit different applications and computing needs. It was a family of compatible computers, meaning that software and peripherals could be shared across different models. The System/360 used IBM’s Solid Logic Technology, hybrid microcircuits that offered improved performance and reliability compared to earlier discrete-transistor and vacuum-tube machines.

DEC PDP-8: The Digital Equipment Corporation (DEC) PDP-8, released in 1965, was a minicomputer designed for general-purpose computing. It was smaller and more affordable than mainframe computers, making it popular for scientific research, education, and industrial applications. The original PDP-8 used discrete transistor logic and magnetic core memory; later models such as the PDP-8/I adopted integrated circuits.

CDC 6600: Released in 1964, the Control Data Corporation (CDC) 6600 was the fastest computer of its time. Designed by Seymour Cray, it is widely regarded as the first supercomputer, with an advanced architecture built around multiple parallel functional units. The CDC 6600 was widely used in scientific and research institutions for computationally intensive tasks.

UNIVAC 1108: The UNIVAC 1108, introduced in 1964, was a mainframe computer known for its reliability and high performance. It used transistor technology and magnetic core memory. The UNIVAC 1108 was used in a variety of scientific and commercial applications, including weather forecasting, nuclear research, and business data processing.

IBM 1130: Released in 1965, the IBM 1130 was a popular mid-range computer that offered a balance between affordability and performance. It used transistor technology and magnetic core memory. The IBM 1130 was commonly used in educational institutions, small businesses, and engineering applications.

During the 1960s, computers continued to shrink in size and become more powerful. Integrated circuits and transistors replaced vacuum tubes, making computers smaller, more reliable, and faster. Magnetic core memory was widely used for data storage, although magnetic tape and disk storage also became common.

Programming languages and software development advanced during this era. High-level programming languages such as FORTRAN (1957), COBOL (1959), and ALGOL 60 matured and came into widespread use, making it easier for programmers to write complex programs.

The computers of the 1960s represented a significant leap forward in terms of performance, size, and capabilities. They were employed in various sectors and played a crucial role in scientific research, business data processing, and advancing computational technology.

Computers & Software – State of the Art: 1969

In 1969, computers and software were experiencing significant advancements, although they were still quite different from the sophisticated technologies we have today. Here is an overview of the state of the art during that time:

Computer Hardware: Mainframe computers dominated the computing landscape in 1969. These large and expensive machines were typically housed in dedicated computer rooms and were primarily used by governments, large corporations, and research institutions. Key mainframe manufacturers included IBM, CDC (Control Data Corporation), and Honeywell.

Operating Systems: Operating systems were evolving to manage the increasing complexity of computer systems. IBM’s OS/360, released in the mid-1960s, provided a comprehensive operating system environment for IBM mainframes. Other operating systems, such as Multics and ITS (Incompatible Timesharing System), were developed by research institutions to support timesharing and multi-user environments.
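The essence of timesharing, the technique systems like Multics and ITS were built around, is that the processor is handed to each job for a brief slice in turn, so several users appear to compute simultaneously on one machine. A minimal round-robin sketch in Python follows; the job names and work units are invented for illustration.

```python
# Round-robin scheduling, the core idea of timesharing: each job runs for
# at most one quantum, then goes to the back of the queue.

from collections import deque

def round_robin(jobs, quantum=3):
    """jobs: list of (name, work_units) pairs."""
    queue = deque(jobs)
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)
        print(f"{name} runs for {ran} unit(s)")
        if remaining > ran:
            queue.append((name, remaining - ran))  # not finished: requeue

round_robin([("alice", 5), ("bob", 2), ("carol", 7)])
```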

Programming Languages: Programming languages were advancing, offering higher-level abstractions for software development. FORTRAN (Formula Translation) and COBOL (Common Business-Oriented Language) were widely used for scientific and business applications, respectively. Additionally, the development of ALGOL 68, a general-purpose programming language, took place in the late 1960s.

Software Development: Software development processes were still in their early stages, with less emphasis on formal methodologies. Programmers typically worked closely with hardware and had a deep understanding of the underlying systems. Debugging and testing were done manually, and version control systems were not as prevalent as they are today.

Databases: The concept of the database management system was emerging, with the hierarchical and network models as the primary approaches. These models organized data as trees of records or as interconnected networks, providing efficient data retrieval and storage for large-scale applications.
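A small sketch may help: in the hierarchical model, records form a tree, and a question is answered by traversing parent-child links rather than by a declarative query. The department/employee schema and data below are invented for illustration.

```python
# Hierarchical model: records form a tree, and queries walk parent-child
# links. The schema and its data are invented.

db = {
    "department": "Engineering",
    "employees": [  # child records of the department
        {"name": "Smith", "skills": [{"skill": "FORTRAN"}, {"skill": "COBOL"}]},
        {"name": "Jones", "skills": [{"skill": "ALGOL"}]},
    ],
}

# "Which employees know FORTRAN?" is answered by traversal, not by a
# declarative query as in the later relational model.
for emp in db["employees"]:
    if any(s["skill"] == "FORTRAN" for s in emp["skills"]):
        print(emp["name"])  # -> Smith
```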

Networking: The foundations of computer networking were being laid, primarily through projects like ARPANET (Advanced Research Projects Agency Network). ARPANET, initiated by the U.S. Department of Defense, connected its first four nodes (at UCLA, SRI, UC Santa Barbara, and the University of Utah) in late 1969, serving as a precursor to the modern internet.

Artificial Intelligence: The field of Artificial Intelligence (AI) was gaining attention, with researchers exploring topics like expert systems and machine learning. Early AI programs were developed, such as the ELIZA chatbot by Joseph Weizenbaum, which simulated human conversation.

User Interfaces: Most computer interactions were based on command-line interfaces, requiring users to have a good understanding of specific commands and syntax. Graphical user interfaces (GUIs) were in their infancy, and concepts like windows, icons, and pointing devices were just beginning to be explored.

The state of computers and software in 1969 reflected a period of rapid technological development and experimentation.

Mainframe computers were at the forefront, programming languages were advancing, and the groundwork for networking and AI was being laid. The era set the stage for future innovations and paved the way for the computing advancements that followed in subsequent decades.

Computers: 1970s

Computers in the 1970s marked another significant phase of advancement in computing technology.

Here is a description of computers from that decade:

DEC PDP-11: The Digital Equipment Corporation (DEC) PDP-11, introduced in 1970, was a widely used minicomputer. It featured a modular design and used semiconductor technology, including integrated circuits. The PDP-11 was known for its versatility and was popular in industries such as manufacturing, scientific research, and education.

IBM System/370: The IBM System/370, announced in 1970, was a mainframe computer series that offered a range of models to suit various computing needs. It offered improved performance and reliability compared to earlier IBM mainframes, and virtual memory was introduced across the line in 1972. The System/370 was widely used in business, government, and scientific applications.

Cray-1: Developed by Seymour Cray and introduced in 1976, the Cray-1 was a supercomputer that pushed the boundaries of computational speed and performance. It utilized a unique vector processing architecture and liquid cooling system. The Cray-1 was primarily used in scientific and research institutions for complex simulations and calculations.

Apple II: Released by Apple Computer, Inc. in 1977, the Apple II was a popular microcomputer that played a significant role in the emerging personal computer market. It featured color graphics, a built-in keyboard, and expandable memory. The Apple II was instrumental in bringing computing to homes, schools, and small businesses.

VAX-11/780: Introduced by Digital Equipment Corporation in 1977, the VAX-11/780 was a powerful minicomputer that provided a high-performance and reliable computing platform. It employed virtual memory and featured a 32-bit architecture. The VAX-11/780 was widely used in scientific research, engineering, and business applications.

During the 1970s, computers continued to become smaller, more affordable, and more accessible to a broader range of users. Integrated circuits and microprocessors became increasingly prevalent, resulting in increased computing power and efficiency. Magnetic storage technologies like hard disk drives and floppy disks gained prominence for data storage, while semiconductor memory began to replace magnetic core as main memory.

The 1970s also witnessed the development of significant programming languages and software. The C programming language, developed by Dennis Ritchie at Bell Labs, became widely used; UNIX itself was rewritten in C in 1973, and the language went on to underpin numerous applications and operating systems.

The computers of the 1970s played a crucial role in driving technological advancements, enabling widespread adoption across various sectors and contributing to the foundation of modern computing as we know it today.

Computers & Software – State of the Art: 1979

By 1979, computers and software had made significant advancements compared to previous decades.

Here is an overview of the state of the art during that time:

Computer Hardware: By 1979, computers had evolved from large mainframe systems to more compact and powerful machines. Microprocessors had become increasingly prevalent, leading to the development of personal computers. Companies like Apple, Commodore, and Tandy were shipping consumer-friendly models such as the Apple II, Commodore PET, and TRS-80; IBM’s Personal Computer (PC) would follow in 1981.

Operating Systems: Popular operating systems of the time included UNIX, developed by Bell Labs, and DEC’s VMS. These operating systems provided advanced features and multitasking capabilities, allowing users to run multiple programs simultaneously. However, the concept of graphical user interfaces (GUIs) was still in its early stages, with the Xerox Alto being one of the pioneers in introducing GUI elements.

Programming Languages: High-level programming languages had become more prevalent, offering improved abstraction and ease of use. Languages such as FORTRAN, COBOL, and BASIC were still widely used for scientific, business, and general-purpose programming. Additionally, the C programming language, developed by Dennis Ritchie at Bell Labs, had gained popularity and influenced the future development of software.

Software Applications: Word processing and spreadsheet applications were gaining traction in the late 1970s. VisiCalc, the first electronic spreadsheet software, was released in 1979, transforming financial analysis and data manipulation. WordStar, one of the earliest word processing programs, was widely used for creating and editing documents.
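The core idea VisiCalc introduced can be sketched in a few lines: a cell holds either a value or a formula referring to other cells, and evaluation follows those references. The cell names and the (op, left, right) formula encoding below are simplified inventions, not VisiCalc’s actual design.

```python
# A spreadsheet in miniature: a cell holds a number or a formula naming
# other cells, and evaluation follows references recursively.

sheet = {
    "A1": 100,                    # unit price
    "A2": 12,                     # quantity
    "A3": ("mul", "A1", "A2"),    # subtotal = A1 * A2
    "A4": ("mul", "A3", 0.08),    # tax = subtotal * 8%
}

def evaluate(ref):
    """ref: a cell name or a literal number."""
    value = sheet.get(ref, ref)   # resolve cell names; pass literals through
    if isinstance(value, tuple):  # a formula: evaluate its operands first
        op, left, right = value
        l, r = evaluate(left), evaluate(right)
        return l * r if op == "mul" else l + r
    return value

print(evaluate("A3"))  # -> 1200
print(evaluate("A4"))  # -> 96.0
```

Change A1 or A2 and every dependent cell reflects the new value on the next evaluation, which is exactly the property that made electronic spreadsheets transformative for financial what-if analysis.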

Networking: Local Area Networks (LANs) were emerging, enabling computer systems to be interconnected within organizations. Ethernet, developed at Xerox PARC in the mid-1970s, facilitated data sharing and resource sharing among networked computers (IBM’s Token Ring would not arrive until the 1980s). The concept of the Internet, as we know it today, was still in its early stages, with the ARPANET serving as a precursor to the modern network.
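Ethernet’s key algorithmic idea, binary exponential backoff after a collision on the shared cable, can be sketched briefly. The code below follows the general scheme (the standard caps the exponent at 10 and abandons a frame after 16 attempts), but it is an illustration of the rule, not an implementation of the protocol.

```python
# Binary exponential backoff, the collision-recovery rule at the heart of
# Ethernet's CSMA/CD: after the n-th collision, wait a random number of
# slot times drawn uniformly from 0 .. 2**n - 1.

import random

def backoff_slots(attempt, max_exponent=10):
    """Slot times to wait after collision number `attempt` (1-based)."""
    exponent = min(attempt, max_exponent)
    return random.randrange(2 ** exponent)  # uniform in 0 .. 2**exponent - 1

random.seed(1)  # reproducible demonstration
for attempt in range(1, 5):
    print(f"collision {attempt}: wait {backoff_slots(attempt)} slot time(s)")
```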

Graphics and Multimedia: Computer graphics were becoming more sophisticated, with advancements in rendering techniques and computer-aided design (CAD) software. However, multimedia applications and digital entertainment were still in their infancy, with limited capabilities for audio and video manipulation on computers.

Artificial Intelligence: AI research gained momentum in the 1970s, with the development of expert systems and knowledge-based systems. Projects like MYCIN, an expert system for medical diagnosis, demonstrated the potential of AI in specialized domains.

The state of computers and software in 1979 marked an important transition towards more accessible and user-friendly computing.

The emergence of personal computers, advancements in programming languages and applications, and the growing interest in networking and AI laid the foundation for future innovations and the eventual proliferation of technology in various aspects of society.

Significant Events: 1950-1979

Here is a list of significant events in computer, telecommunications, and information management history from 1950 to 1979:

1950: The first coaxial cable for long-distance telephone communication is laid between New York and Philadelphia, greatly increasing the capacity and quality of voice transmission.

1951: UNIVAC I, the first commercially available computer in the United States, is installed at the United States Census Bureau, marking a significant milestone in automated data processing and information management.

1952: Grace Hopper develops the first compiler, known as the A-0 system, which translates high-level programming languages into machine code.

1954: IBM introduces the IBM 650, a widely used computer in business and scientific applications.

1956: The first transatlantic telephone cable, known as TAT-1, is inaugurated, allowing for direct telephone communication between North America and Europe.

1956: The term “artificial intelligence” is coined during the Dartmouth Conference, leading to the exploration of AI techniques for information processing and decision-making.

1957: Sputnik 1, the first artificial satellite, is launched by the Soviet Union, leading to increased focus on space exploration and the development of computer systems to support space missions and calculations.

1958: Jack Kilby at Texas Instruments invents the integrated circuit, a crucial component for miniaturizing computer hardware.

1958: John McCarthy develops LISP (LISt Processing), one of the first high-level programming languages, designed specifically for artificial intelligence research.

1962: J.C.R. Licklider of MIT publishes a series of memos envisioning a global computer network, which eventually leads to the development of the Internet.

1962: The Telstar satellite, the first active communications satellite, is launched, enabling live television broadcasts and international telephone calls via space.

1962: The Cuban Missile Crisis occurs, during which computer-based simulations and calculations play a role in decision-making processes and strategic planning by both the United States and the Soviet Union.

1963: Project MAC (Multiple Access Computer or Machine-Aided Cognition) is initiated at MIT, focusing on computer-based information management, time-sharing systems, and human-computer interaction.

1964: IBM announces the IBM System/360, a family of compatible mainframe computers that revolutionizes computer architecture and software compatibility across different hardware models.

1965: Digital Equipment Corporation (DEC) releases the PDP-8, one of the first commercially successful minicomputers.

1965: The first commercial communications satellite, Intelsat I (“Early Bird”), is launched by the International Telecommunications Satellite Consortium (Intelsat), expanding global communications capabilities.

1968: Douglas Engelbart demonstrates the “Mother of All Demos,” showcasing groundbreaking software and hardware innovations, including the mouse, hypertext, and collaborative editing tools.

1969: The Advanced Research Projects Agency Network (ARPANET), the precursor to the Internet, is established by the U.S. Department of Defense, connecting computers at multiple research institutions and laying the foundation for modern computer networking.

1969: The Apollo 11 mission successfully lands astronauts Neil Armstrong and Buzz Aldrin on the moon, with computer systems onboard the Lunar Module (LM) playing a critical role in navigation and landing.

1970: Edgar F. Codd publishes the paper “A Relational Model of Data for Large Shared Data Banks,” introducing the concept of relational databases, which revolutionizes data storage and management (see the sketch following this list).

1970: The IBM System/370 Model 145 mainframe computer is introduced, the first IBM computer with main memory built entirely from monolithic semiconductor chips; virtual storage capabilities followed across the System/370 line in 1972, enhancing the management and access of large amounts of data.

1970: The first Earth Day is celebrated, highlighting environmental issues and the need for data collection and analysis to understand and address global challenges. Computers are employed for environmental research and modeling.

1971: Intel introduces the first microprocessor, the Intel 4004, paving the way for the development of personal computers.

1971: Ray Tomlinson sends the first email between machines on ARPANET, choosing the @ sign to separate the user name from the host name, a convention still in use today.

1971: Alan Kay and colleagues at Xerox PARC begin developing the Smalltalk programming language, advancing the concept of object-oriented programming (OOP), which becomes influential in software development.

1972: Dennis Ritchie develops the C programming language at Bell Labs, providing a powerful and flexible language for systems programming.

1973: Xerox PARC (Palo Alto Research Center) develops the Xerox Alto, a pioneering computer featuring a graphical user interface (GUI) and a mouse. Its Bravo editor pioneered WYSIWYG editing, enabling the creation and manipulation of documents with text and graphics.

1973: The first mobile phone call is made by Motorola researcher Martin Cooper, using a handheld prototype phone in New York City.

1973: Robert Metcalfe invents Ethernet, a widely used networking technology that enables computers to communicate and share resources.

1973: The Yom Kippur War takes place in the Middle East, during which computer systems are used for military command, control, and communication, facilitating strategic decision-making and coordination of forces.

1975: The Altair 8800, one of the first personal computers, is introduced on the cover of the January 1975 issue of Popular Electronics, sparking a wave of enthusiasm for home computing and laying the foundation for the personal computer revolution.

1975: IBM introduces the IBM 5100 Portable Computer, one of the earliest portable computers, providing users with more flexibility in managing and accessing information on the go.

1975: Bill Gates and Paul Allen found Microsoft, a software company that becomes instrumental in the development of personal computer software.

1976: X.25, the standard for public packet-switched data networks, is approved, providing a standard for digital data communication and paving the way for modern packet-switched networks like the Internet.

1976: Steve Jobs and Steve Wozniak found Apple Computer, Inc. and release the Apple I, a personal computer sold as a fully assembled circuit board.

1976: Seymour Cray introduces the Cray-1 supercomputer, renowned for its speed and vector processing capabilities.

1976: The United States celebrates its bicentennial, with computer technology employed in various aspects of the celebration, including data processing for organizing events and managing logistics.

1977: Commodore releases the Commodore PET, an all-in-one personal computer targeted at the education market.

1977: Tandy Corporation introduces the TRS-80, one of the first successful mass-produced personal computers.

1977: The Voyager 1 and 2 spacecraft are launched, equipped with computer systems to navigate through the solar system, collect scientific data, and communicate with Earth, contributing to advancements in space exploration.

1978: The first computer bulletin board system (BBS) is created by Ward Christensen and Randy Suess, allowing users to communicate and exchange files.

1978: Intel introduces the 8086 microprocessor, the first of the x86 line that would power generations of personal computers.

1979: Relational Software, Inc. (later renamed Oracle Corporation) releases Oracle V2, the first commercially available relational database management system (RDBMS), revolutionizing the management of structured data.

1979: VisiCalc, the first spreadsheet software, is released for the Apple II, transforming financial and data analysis by providing efficient information management and calculation capabilities.

1979: NTT launches the world’s first commercial cellular telephone network in Tokyo, marking the beginning of the mobile communications era.

These events represent significant milestones in computer, telecommunications, and information management history from 1950 to 1979, spanning advances in hardware, software, networking, databases, user interfaces, and the emergence of personal computing and productivity software.
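As promised above, here is a minimal Python sketch of Codd’s relational idea: data lives in flat tables of rows, and queries are declarative combinations of operations such as selection and join rather than record-at-a-time traversals. The tables and field names are invented for illustration.

```python
# The relational model in miniature: flat tables of rows, queried with
# a join followed by a selection, the counterparts of SQL's JOIN and WHERE.

employees = [
    {"name": "Smith", "dept_id": 1},
    {"name": "Jones", "dept_id": 2},
]
departments = [
    {"dept_id": 1, "dept": "Engineering"},
    {"dept_id": 2, "dept": "Accounting"},
]

# Join employees to departments on dept_id, then select Engineering rows.
joined = [
    {**e, **d}
    for e in employees
    for d in departments
    if e["dept_id"] == d["dept_id"]
]
print([row["name"] for row in joined if row["dept"] == "Engineering"])
# -> ['Smith']
```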

Computers: Fiction and Non-Fiction

Here is a list of computer-related fiction and non-fiction literature published between 1950 and 1979, including author, date, and publisher information:

“I, Robot” by Isaac Asimov (1950) – Published by Gnome Press.

“The Adolescence of P-1” by Thomas J. Ryan (1977) – Published by Ace Books.

“Time Enough for Love” by Robert A. Heinlein (1973) – Published by G.P. Putnam’s Sons.

“Colossus” by D.F. Jones (1966) – Published by Random House.

“The Moon Is a Harsh Mistress” by Robert A. Heinlein (1966) – Published by G.P. Putnam’s Sons.

“The Shockwave Rider” by John Brunner (1975) – Published by Harper & Row.

“Stand on Zanzibar” by John Brunner (1968) – Published by Doubleday.

“The Terminal Man” by Michael Crichton (1972) – Published by Knopf.

“The Two Faces of Tomorrow” by James P. Hogan (1979) – Published by Ballantine Books.

“The Cyberiad: Fables for the Cybernetic Age” by Stanisław Lem (1965) – Published in Polish in 1965; English translation by Michael Kandel published in 1974.

“The Computer Connection” by Alfred Bester (1975) – Published by Berkley Publishing Group.

“Virtual Unrealities: The Short Fiction of Alfred Bester” by Alfred Bester (1997) – Published by Vintage Books.

“The Pritcher Mass” by Gordon R. Dickson (1972) – Published by Doubleday.

“Demon Seed” by Dean Koontz (1973) – Published by Viking Press.

“When HARLIE Was One” by David Gerrold (1972) – Published by Ballantine Books.

“Manna” by Marshall Brain (2003) – Self-published.

“Computing Machinery and Intelligence” by Alan Turing (1950) – Published in the journal Mind, Oxford University Press.

“The Mathematical Theory of Communication” by Claude Shannon and Warren Weaver (1949) – Published by the University of Illinois Press.

“A Symbolic Analysis of Relay and Switching Circuits” by Claude Shannon (1938) – Published in the journal Transactions of the American Institute of Electrical Engineers.

“Programming a Computer for Playing Chess” by Claude Shannon (1950) – Published in the journal Philosophical Magazine.

“The General and Logical Theory of Automata” by John von Neumann (1951) – Published in Cerebral Mechanisms in Behavior: The Hixon Symposium, John Wiley & Sons.

“A Mathematical Theory of Communication” by Claude Shannon (1948) – Published in the Bell System Technical Journal.

“Introduction to Metamathematics” by Stephen C. Kleene (1952) – Published by North-Holland Publishing Company.

“Information Theory, Inference, and Learning Algorithms” by David MacKay (2003) – Published by Cambridge University Press. Although published in 2003, the book covers concepts from the period.

“Principles of Compiler Design” by Alfred V. Aho and Jeffrey D. Ullman (1977) – Published by Addison-Wesley.

“Formal Languages and Their Relation to Automata” by John E. Hopcroft and Jeffrey D. Ullman (1969) – Published by Addison-Wesley.

“The Structure of Scientific Revolutions” by Thomas S. Kuhn (1962) – Published by the University of Chicago Press.

“On Computable Numbers, with an Application to the Entscheidungsproblem” by Alan Turing (1936) – Published in the Proceedings of the London Mathematical Society.

“The Art of Computer Programming, Volume 1: Fundamental Algorithms” by Donald E. Knuth (1968) – Published by Addison-Wesley.

“The Mythical Man-Month: Essays on Software Engineering” by Frederick P. Brooks Jr. (1975) – Published by Addison-Wesley.

“Theory of Self-Reproducing Automata” by John von Neumann (1966) – Published by the University of Illinois Press.

“Elements of the Theory of Computation” by Harry R. Lewis and Christos H. Papadimitriou (1981) – Published by Prentice-Hall.

“Theory of Games and Economic Behavior” by John von Neumann and Oskar Morgenstern (1944) – Published by Princeton University Press.

“A Theory of the Learnable” by Leslie Valiant (1984) – Published in the journal Communications of the ACM.

“Information Retrieval: Data Structures & Algorithms” by William B. Frakes and Ricardo Baeza-Yates (1992) – Published by Prentice-Hall.

Please note that while some of these works were published before 1950 or after 1979, they contain significant contributions to computer fiction and theory and are relevant to the overall understanding of the field during the specified time period.
