There are 10 types of problems covered in this section: Algorithmic, Non-Algorithmic, Finite, Infinite, Computable, Non-Computable, Decidable, Undecidable, Tractable and Intractable.
Algorithmic problems are problems that can be solved by a set of rules or a series of steps, a bit like a recipe for cooking. Non-Algorithmic problems are problems that can't be solved using a set of rules or steps; they tend to be more opinion-based questions, like "who would win in a fight" style questions (side note: Batman always wins with sufficient planning time).
Now algorithmic problems can themselves be divided into 2 types (alongside the initial 2): Finite and Infinite. Finite problems have a finite number of inputs and are (eventually) solvable; infinite problems have an infinite number of inputs and may or may not be solvable.
A computable problem is one that an algorithm can solve, whereas a non-computable problem cannot be solved by any algorithm, no matter how much time or memory is available. Such problems tend to need a human to look at a proposed answer to judge whether it works or not.
A decision problem is a Yes/No algorithmic problem with any number of inputs. A decidable problem is a decision problem that an algorithm can always answer correctly, e.g. "Is 10 divided by 5 an even number?" An undecidable problem is a decision problem for which no algorithm can give the correct Yes/No answer in every case, e.g. the Halting Problem ("will this program ever stop?"). Note that "What's 10 divided by 5?" isn't a decision problem at all, since it doesn't have a Yes/No answer.
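To make the decidable example concrete, here's a minimal Python sketch (the function name is made up for illustration) that answers the Yes/No question above for any pair of numbers:

```python
# A decidable decision problem: every input gets a definite Yes/No answer.
def is_quotient_even(a, b):
    """Return True if a divided by b is an even whole number."""
    q = a // b
    return a % b == 0 and q % 2 == 0

print(is_quotient_even(10, 5))  # 10 / 5 = 2, which is even -> True
```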
A tractable problem is a problem which can be solved within a reasonable amount of time (known as polynomial time), e.g. sorting a collection of cards. Intractable problems are problems that can't be solved within a reasonable amount of time, e.g. the Travelling Salesman Problem.
Friday, 14 December 2012
Monday, 3 December 2012
3.1.4 Turing Machine
In 1930 Kurt Gödel showed there are mathematical statements that can never be proved or disproved. Later, Alan Turing showed that there is no general method for working out whether a mathematical problem is solvable or not.
Alan Turing was a British mathematician who introduced the concept of the Turing Machine, a machine that can solve any problem that can be represented algorithmically. Using this machine he showed that it isn't possible in general to determine whether a problem has a solution or not; this means we can never tell whether a problem has no solution, or whether we simply haven't put enough resources into it.
The work done by Turing is the basis from which the first computers were developed; this is shown by the fact that anything a modern computer can do can be represented on a Turing Machine.
A Turing Machine has 6 components:
1) an input alphabet
2) an infinitely long tape
3) an output alphabet
4) a tape head that can read/write to the cells of the tape
5) a finite set of states, one of which is the start state
6) a set of instructions/program
Essentially it's a deterministic FSM with an infinitely long tape attached.
A Turing Machine describes an algorithm, and since computer programs are essentially algorithms, a Turing Machine can run through and check an algorithm independently of the architecture of any particular machine. As a result, it's a more general representation of the algorithm: when running on a real machine, the architecture makes decisions about how to store values, which doesn't happen on a Turing Machine.
Finally, Transition Rules are shorthand to show what happens at each state of the Turing Machine:
δ(1,1)=(2,1,→) means
Transition Function(Current State (1) , Input Symbol (1)) = (Next State (2), Output Symbol (1), Movement (Right))
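The rules above can be sketched as a tiny simulator in Python. This is an illustrative toy machine (the bit-flipping rules are made up for the example, not from any exam specification):

```python
# A minimal Turing Machine sketch.
# Rules map (current_state, read_symbol) -> (next_state, write_symbol, move).
# This example machine flips every bit on the tape, then halts on a blank.
rules = {
    (1, "1"): (1, "0", 1),   # read 1 -> write 0, move right, stay in state 1
    (1, "0"): (1, "1", 1),   # read 0 -> write 1, move right
    (1, "_"): (0, "_", 0),   # blank: write blank, don't move, halt (state 0)
}

def run(tape, state=1, head=0):
    tape = list(tape)
    while state != 0:                      # state 0 is the halting state
        symbol = tape[head] if head < len(tape) else "_"
        if head >= len(tape):
            tape.append("_")               # the tape is infinitely long
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run("1011"))  # -> "0100"
```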
Friday, 23 November 2012
3.1.3 Finite State Machines
A Finite State Machine (FSM) is a machine which accepts an input and produces an output.
There are 3 types of FSM: the Mealy Machine, the Moore Machine and the Finite State Automaton.
A Finite State Automaton (FSA) is an FSM which has a finishing state and only outputs a Yes/No answer, based on which state the input string ends on.
A Mealy Machine is an FSM where the outputs are on the transitions as opposed to the states.
A Moore Machine is an FSM where the outputs are on the states as opposed to the transitions.
An FSM/FSA can be represented in a variety of ways; the most common is a State Transition Diagram. An FSM can also be represented by a Transition Table.
Current State: S1 S1 S2 S2 S3 S3
Input: 0 1 0 1 0 1
Next State: S2 S3 S3 S3 S1 S1
Output: A B C B C B
This table shows that in State 1, if you input 0 the FSM will output A and the next state will be State 2.
The third way to represent a FSM is as a Transition Function. These have the standard formula of (Input Symbol, Current State) -> (Output Symbol, Next State). For example (1, S3) -> (A, S2) would mean that if you are currently in State 3 and if you input 1 the FSM will output A and move to State 2.
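The transition table above can be sketched as a Mealy-style machine in Python (the function and variable names are just for illustration):

```python
# The transition table above, written as a Mealy machine:
# (current_state, input) -> (next_state, output)
table = {
    ("S1", "0"): ("S2", "A"), ("S1", "1"): ("S3", "B"),
    ("S2", "0"): ("S3", "C"), ("S2", "1"): ("S3", "B"),
    ("S3", "0"): ("S1", "C"), ("S3", "1"): ("S1", "B"),
}

def run_fsm(inputs, state="S1"):
    outputs = []
    for symbol in inputs:
        state, out = table[(state, symbol)]   # look up the transition
        outputs.append(out)
    return "".join(outputs), state

print(run_fsm("010"))  # -> ("ABC", "S1")
```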
Finally there are 2 types of FSM/FSA, Deterministic and Non-Deterministic. Deterministic means there is only 1 transition for each input while Non-Deterministic means there's more than one possible transition per input allowing for more than 1 route round a State Transition Diagram.
Oh and also, a Halting State is a state with no outgoing transitions.
Friday, 16 November 2012
3.1.2 Comparing Algorithms
Algorithms have varying levels of complexity. This covers 6: Constant, Linear, Logarithmic, Polynomial, Exponential and Factorial.
Constant Complexity (also known as O(1)) means the time taken doesn't grow even if the input size varies. For example, an algorithm that finds out if a number is even or odd is O(1) complexity: no matter how large the number is, the time taken is the same. O(1) complexity doesn't have to be just a single line; ordering the first 15 records of a file, for example, will take the same amount of time whether the file is 15 records long or 150 records long.
Linear Complexity (O(n)) means a problem takes proportionally more time as the input gets bigger. For example, a linear search through a file: searching a file with 50 records will take about 5 times longer than searching one with 10 records.
Logarithmic Complexity (O(log n)) means a problem takes longer the larger the input is, but nowhere near to the extent of O(n). A binary search is a good example: the longer the list, the longer it takes to search, but not nearly as long as a Linear Search would take.
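A rough Python sketch of the O(log n) behaviour, counting comparisons when binary searching a sorted list of 1000 items:

```python
# Binary search halves the search space each step, so it needs roughly
# log2(n) comparisons, versus up to n for a linear search.
def binary_search(items, target):
    low, high, steps = 0, len(items) - 1, 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if items[mid] == target:
            return mid, steps
        if items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, steps

data = list(range(1000))            # a sorted list of 1000 items
index, steps = binary_search(data, 999)
print(index, steps)                 # finds the last item in about 10 steps, not 1000
```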
Polynomial Complexity (O(n^k)) means a problem takes longer the larger the input is, and the increase in time is faster than the increase in n. Polynomial Complexity includes Quadratic Complexity O(n^2) and Cubic Complexity O(n^3). An example of O(n^2) would be shaking hands with everyone in a room: with 2 people only 1 handshake is needed, but with 7 people 21 are needed.
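The handshake example as a one-line Python formula (each pair shakes once, giving n(n-1)/2 handshakes):

```python
# Counting handshakes: every pair of people shakes hands exactly once,
# which is n*(n-1)/2 pairs - the growth is quadratic, O(n^2).
def handshakes(n):
    return n * (n - 1) // 2

print(handshakes(2))  # -> 1
print(handshakes(7))  # -> 21
```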
Exponential Complexity (O(k^n)) covers problems where the solution can only be found for small values of n. An example is the story of "The Chinese Emperor and the Warlord": the payment chosen was the chessboard method, in which 1 grain of rice is placed on the first square, 2 on the second, 4 on the third and so on. As the number of squares increases, the number of grains grows exponentially. For small inputs it can be worked out, but for larger and larger inputs it can potentially take longer than the Earth will be around for.
Factorial Complexity (O(n!)) covers problems that can't be solved in a reasonable amount of time. An example is the Travelling Salesman Problem: checking every possible route is O(n!), and how to solve it most efficiently is still being researched. Dynamic Programming can bring it down to exponential time, O(k^n), but that still means only the smallest versions of the problem can be solved. Quantum Computing may eventually help with these sorts of problems, but that's still a way off as it's in its infancy.
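A brute-force Travelling Salesman sketch in Python, using a hypothetical distance matrix, shows why O(n!) blows up: with the start city fixed, 4 cities mean only 3! = 6 routes to check, but 20 cities would mean 19! routes.

```python
# Brute-force Travelling Salesman: try every ordering of the cities.
from itertools import permutations

# Hypothetical symmetric distance matrix for 4 cities (numbered 0..3).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]

def shortest_tour(dist):
    n = len(dist)
    best_len, best_route = float("inf"), None
    for perm in permutations(range(1, n)):       # fix city 0 as the start
        route = (0,) + perm + (0,)               # ...and return to it
        length = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if length < best_len:
            best_len, best_route = length, route
    return best_len, best_route

print(shortest_tour(dist))  # shortest round trip and its route
```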
[Figure: a graph showing a visual representation of the various complexities in relation to each other.]
Monday, 12 November 2012
3.1.1 Information Hiding and Abstraction
Information Hiding: Hiding the complexity of a system to simplify the experience for the user.
Abstraction: The process of removing unnecessary details in the representation of a system.
To abstract, you hide information to create an abstraction.
Levels of Abstraction
Application Level: Hides complex process(es) behind an icon.
High Level Programming Language: Hides low-level commands behind a single high-level one, e.g. IF
Machine Code: Hides complex electronics behind a command eg. ADD
Electronics: Hides the physical atomic interactions behind electronic components.