Tuesday 2 December 2014

Fellow CSC165 SLOGs

Check out my sidebar for some other CSC165 SLOGs that I think are well written and offer some good insight into the material in the course :)

Week 12

This week in CSC165, we learned more about countability and also had a brief intro to induction. We talked about when a set of numbers is defined as countable - basically, when it can be listed (i.e. described as a list where each item has its own position, so every element gets matched with a natural number).

We also talked about the concept of diagonalization. The rational numbers turn out to be countable, because you can arrange all of them into one big list (for example, by listing fractions in order of the sum of their numerator and denominator). The set of infinite decimals in the range 0-1, on the other hand, is bigger than the set of natural numbers. Cantor's diagonalization argument proves that even if we write down what we think is a list of all the decimal numbers, we will always have missed one, because we can build a new decimal that disagrees with every number on the list.

The technique of diagonalization is to change the first decimal digit of the first number on the list (add 1 if the digit is less than 5, subtract 1 if it's greater than or equal to 5), then do the same to the second digit of the second number, and so on. This ends up looking like a diagonal traverse downwards. The process creates a new number that differs from the nth number on the list in its nth digit, so it can't appear anywhere on the list. This is pretty powerful because it shows that no list can contain every infinite decimal - in other words, some infinite sets are simply too big to be listed.
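To convince myself that this really produces a new number, I tried a little Python sketch of the idea (my own toy version on a finite list of digit strings, not something from lecture):

    # pretend each string is the decimal expansion of a number between 0 and 1
    listed = ['500000', '141592', '999999', '010101', '333333', '718281']

    new_digits = []
    for i, number in enumerate(listed):
        d = int(number[i])  # the i-th digit of the i-th number (the diagonal)
        # change the digit: add 1 if it's less than 5, otherwise subtract 1
        new_digits.append(str(d + 1 if d < 5 else d - 1))

    missing = '0.' + ''.join(new_digits)
    print(missing)  # differs from the i-th number in its i-th digit, so it's not on the list

Of course this only works on a finite list, but the same digit-by-digit construction applies to an infinite list, which is the whole point of the argument.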

We also briefly went over induction. This is a word that I've heard being used quite often, although I didn't know what it meant until now. Basically, it's a technique for proving that a statement holds for every natural number.

The idea is that we prove P(0) is True (the base case), and we prove that whenever P(n) is True, P(n+1) must also be True (the inductive step). Therefore, every step after a step that is True must also be True, so the statement holds for every natural number. What we learned so far seems simple enough, but from the horror stories I've heard about CSC236, this is only the beginning...
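To make the base case / inductive step idea concrete, here's a tiny example I tried on my own (not one from lecture). Let P(n) be the claim that 0 + 1 + ... + n = n(n+1)/2.

Base case: P(0) is True, since 0 = 0(0+1)/2.
Inductive step: assume P(n) is True. Then 0 + 1 + ... + n + (n+1) = n(n+1)/2 + (n+1) = (n+1)(n+2)/2, which is exactly P(n+1).
Conclusion: by induction, P(n) is True for every natural number n.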

Saturday 22 November 2014

Week 11

This week we covered the halting problem. I'll admit that it was definitely hard to get my mind around it at first. In my science classes, if I have a hard time understanding something then I try to summarize it, so here's my best shot: given a description of how a program works (and assuming it can be implemented), decide whether the program ever finishes running or continues to run forever. Essentially, decide whether a given program and input will run forever or eventually halt.

The problem should be a yes-or-no question about the input, e.g. Is the number prime? Does the function reach this line? Are unicorns pink and fluffy? An algorithm is then a solution to the problem if it returns the right answer for every input in a finite amount of time.
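For example, "Is the number prime?" does have an algorithm that solves it. Here's a quick Python version I wrote to illustrate (definitely not the fastest, but it always halts and always answers correctly):

    def is_prime(n):
        if n < 2:
            return False
        for d in range(2, n):   # try every possible divisor
            if n % d == 0:
                return False    # found a divisor, so n is not prime
        return True             # no divisors found, so n is prime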

Before Alan Turing, the common belief was that we could always come up with a well-defined solution for any such problem. The famous proof by Alan Turing shows that a general algorithm to solve the halting problem cannot exist for ALL possible program-input pairs. That makes the halting problem undecidable (it has no solution). When I think about it, this makes sense to me. Computers can't do everything.

Here is a variation of the halting proof for practice...

Prove that the function are_unicorns_pink_and_fluffy cannot be computed.

Now to reduce halt to are_unicorns_pink_and_fluffy
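The exact code from class isn't reproduced here, so this is my own sketch of the pieces. I'm assuming are_unicorns_pink_and_fluffy(f, I) is supposed to return True exactly when f(I) returns 'yes':

    def halt(f, I):
        def other_function(x):
            f(I)            # if f(I) never halts, we never get past this line
            return 'yes'    # only reached when f(I) halts
        return are_unicorns_pink_and_fluffy(other_function, I)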


Case 1: if f(I) halts, then other_function(I) will return 'yes', so are_unicorns_pink_and_fluffy(other_function, I) returns True, and this implementation of halt(f, I) returns True.
Case 2: if f(I) does not halt, then other_function(I) will never return 'yes', so are_unicorns_pink_and_fluffy(other_function, I) returns False, and this implementation of halt(f, I) returns False.

So this program would be a correct implementation of halt, which means that if are_unicorns_pink_and_fluffy were computable, then halt would be computable too. However, that contradicts the fact that halt is NOT computable. Therefore, are_unicorns_pink_and_fluffy cannot be computable either.

Sunday 16 November 2014

Week 10

This week we learned about Big Omega notation, which essentially gives a lower bound on the running time (you can think of it as the flip side of Big Oh's upper bound). The statement that we need to prove has the following format:

∃c ϵ ℝ⁺, ∃B ϵ ℕ, ∀n ϵ ℕ, n ≥ B ⇒ f(n) ≥ cn²

So it's basically the same as the format for Big Oh, with the exception of the very important inequality: f(n) ≥ cn², as opposed to f(n) ≤ cn².

Therefore, cn² is the lower bound in this case. Since this week wasn't a big stretch from the Big Oh proofs we did the last two weeks, I feel fairly comfortable with these proofs. Especially the first couple that were introduced, which were really basic.
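For instance, here's a toy example I made up myself (not from lecture): take f(n) = 2n² + 3n.

Pick c = 2 and B = 0. Then for any n ϵ ℕ with n ≥ B:
        f(n) = 2n² + 3n ≥ 2n² = cn²     # since 3n ≥ 0
So f(n) ≥ cn² whenever n ≥ B, which is exactly the Big Omega statement above.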

However, the week quickly got more difficult as we were asked to prove statements whose antecedent was a conjunction of two Big Oh statements, which then have to be combined (for example, by multiplying the two bounds together) in the proof. My first approach to the problem was to write down the structure. Being able to have a structure makes it easier for me to see the big picture of what we're doing and focus on filling in what's necessary.

Monday 10 November 2014

Week 9

More Big Oh stuff....

So before I jump into the logic of the proofs, I thought I'd give an overview of the proof structure. I really like structure and computability and order (I'd probably be terrible at MAT335: Chaos and Fractals). Quick recap: Big Oh is used to bound the worst case run time from above, and Big Omega is used to bound the run time from below (the best case).

This week, we covered proofs of Big Oh. The basic structure of the statement we're trying to prove is the following: 

∃c ϵ ℝ⁺, ∃B ϵ ℕ, ∀n ϵ ℕ, n ≥ B ⇒ f(n) ≤ cn²

I've become pretty comfortable with these symbols in the last few weeks, but at first the variables were sort of overwhelming because I had no idea what they meant. However, since c and B have existential quantifiers, they're really just two numbers (a positive real and a natural number respectively) that we get to choose to make our statement true.

Basically, f(n) is the maximum number of steps the algorithm takes on an input of size n, aka the worst case run time. This has to be less than or equal to (bounded above by) cn² for all large enough n.

When you look at the statement and write the structure of the proof in the same way as we've been doing then it's not so intimidating anymore. 

Pick c = ________. Then c ϵ ℝ⁺ #to introduce ∃
Pick B = ________. Then B ϵ ℕ #to introduce ∃
Assume n ϵ ℕ #to introduce ∀
                 Assume n ≥ B #introduce antecedent
                #actual content of the proof here
                 Then f(n) ≤ cn² #the consequent
                 Then n ≥ B ⇒ f(n) ≤ cn² #introduce ⇒
Then ∀n ϵ ℕ, n ≥ B ⇒ f(n) ≤ cn² #introduce ∀
Then ∃c ϵ ℝ⁺, ∃B ϵ ℕ, ∀n ϵ ℕ, n ≥ B ⇒ f(n) ≤ cn² #introduce ∃
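
To see what goes in the blanks, here's a simple made-up example (my own, not one from lecture): prove that f(n) = 3n² + 50 is in O(n²).

Pick c = 53. Then c ϵ ℝ⁺ #to introduce ∃
Pick B = 1. Then B ϵ ℕ #to introduce ∃
Assume n ϵ ℕ #to introduce ∀
                 Assume n ≥ B #introduce antecedent
                 Then n² ≥ 1, so 50 ≤ 50n²
                 Then f(n) = 3n² + 50 ≤ 3n² + 50n² = 53n² = cn² #the consequent
                 Then n ≥ B ⇒ f(n) ≤ cn² #introduce ⇒
Then ∀n ϵ ℕ, n ≥ B ⇒ f(n) ≤ cn² #introduce ∀
Then ∃c ϵ ℝ⁺, ∃B ϵ ℕ, ∀n ϵ ℕ, n ≥ B ⇒ f(n) ≤ cn² #introduce ∃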

Now I'm really glad we covered the notations and proof structure extensively at the beginning of the course because it helps tremendously with proving run time scenarios. This is obviously a very important concept in computer science because it's essential to be able to prove how efficient our algorithms are.

Saturday 1 November 2014

Week 8

I think I understand the concept of Big Oh and Big Omega. It's not my first time hearing about it, although this is my first time seeing Big Oh and Big Omega proofs. The actual concept makes sense to me but the proof is what worried me. I searched up some proofs online and the jumps from one step to another are not steps that I would intuitively consider making. 

Also I can't imagine tracing through recursive functions. It was fairly easy to follow along using the functions Danny showed in class that involved iteration. However, I can't figure out how I would trace through a recursive function such as merge sort to determine the number of steps. It's hard enough already for me to understand most recursive functions and figure out what the output is, let alone the number of steps given a variable input. 

To get some practice, I decided to look at the recursive function that I am most familiar with - the Fibonacci sequence. I googled the complexity and found that it's O(2^n), so I'll try my best to justify it...

To start off with, the base cases n = 0 and n = 1 are answered directly, so they obviously take O(1) steps.

Therefore, computing the nth term means computing the two previous terms plus a constant amount of work to add them together, which gives the recurrence T(n) = T(n-1) + T(n-2) + O(1).

So if we work under the assumption that T(n-1) = O(2^(n-1)) and T(n-2) = O(2^(n-2)), then expanding the recurrence level by level we get this...

T(n) = O(2^(n-1)) + O(2^(n-2)) + ... + O(1)

.... which can be simplified to around 2^n, since 2^(n-1) + 2^(n-2) + ... + 1 = 2^n - 1. I understand how the upper bound works in the sense that you're estimating up to account for the worst case, but again, I can't form a proof for the tight bound case at this point.
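To sanity-check the 2^n claim, I wrote a tiny Python script that counts how many recursive calls the naive Fibonacci makes (my own little experiment, so take it as a rough illustration rather than a proof):

    def fib_calls(n):
        # returns (fib(n), number of calls made while computing it)
        if n < 2:
            return n, 1
        a, calls_a = fib_calls(n - 1)
        b, calls_b = fib_calls(n - 2)
        return a + b, calls_a + calls_b + 1

    for n in range(5, 26, 5):
        _, calls = fib_calls(n)
        print(n, calls, 2 ** n)   # the call count grows exponentially but stays below 2^n

The number of calls blows up really fast, which matches the idea that 2^n is an upper bound (though not a tight one).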

Saturday 18 October 2014

Week 6

This week we covered the dreaded delta-epsilon proofs. I liked the way that Danny explained them in terms of "an enemy". I've seen these proofs in calculus before, and I actually found them much more understandable in CSC165 than in calculus.

We also touched on the different ways to carry out a proof. Here's what I think of them so far...

Direct proofs (with transitivity): most common/straightforward strategy. One conclusion leads to another until you eventually get the statement you're looking for. It's definitely my go-to strategy when I'm first tackling a proof but obviously it's not necessarily the easiest method for all proofs.

Equivalent proof (using contrapositive): I definitely would never have come up with this strategy myself, but after seeing it in class, I was pretty impressed by how useful it can be (there's a small example after this list). I definitely still need to develop more intuition to know when to use this method though.

Proof of existence - prove a ∃ statement by actually finding or constructing an example that works. We mostly saw this used for proofs about sequences.

Proof by contradiction - prove a statement by assuming its negation and showing that it leads to something impossible. I think this is definitely one of the most useful strategies we've learned so far. It's certainly very powerful when you can't see how to prove the statement directly but the negation gives you something concrete to work with.

Non-boolean proofs - I think the most important thing is to be able to define the function in a logical way so you can manipulate it easily. Not completely sure when I would use it though.

Falsity of statements - similar to direct proof but with the negation. The most difficult part would be figuring out if you should prove the negation or show a contradiction in the original statement.

Proof by cases - I see this most commonly used with inequalities. I think I might be one of the few people who actually enjoy doing these. Maybe not on a test since they would take quite a while, but I find them to be the most similar to computation-like problems.
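
A tiny example of the contrapositive trick mentioned above: to prove "if n² is odd then n is odd", it's much easier to prove the contrapositive "if n is even then n² is even" directly - if n = 2k, then n² = 4k² = 2(2k²), which is even.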

I think the trickiest part with these strategies is definitely figuring out which one to use. Especially in a test situation, the answer may not always be obvious at first glance.