Wednesday, April 2, 2014

Last Slog. Complexity (For marks)

The classes for CSC148 are over. Over the past month and a half, we've been introduced to the concept of complexity in algorithms. As a quick recap: the complexity of an algorithm describes how its run time grows as the size of the input gets large. This kind of scaling is especially apparent in the algorithms we have been dealing with, namely sorting. The reason is that the number of elements in a list takes on integer values, so we can clearly see what happens to the number of steps as we increase the size of the list by one. The most common big-oh complexities are constant, logarithmic, linear, quadratic, cubic and so on. In the labs, we examined various sorts by graphing their running times for different scenarios, such as lists that are sorted, sorted backwards, and unsorted. The sorts can basically be divided into two kinds: those with average complexity O(n^2) and those with average complexity O(n log n). Sorts such as insertion sort, selection sort, and bubble sort are all O(n^2) because, whether they are finding the smallest value or swapping out the largest, they all require a portion of the list to be repeatedly looked at after each iteration. The difference among these O(n^2) sorts comes down to the length of the portion they have to look at. For example, bubble sort goes through its swapping algorithm regardless of how the last numbers are arranged, even when the numbers stay in place, while insertion sort only looks at the unsorted portion; if the list is already sorted, it just compares each term with its previous term, making it O(n) in the best case. Merge sort and quick sort, on the other hand, are O(n log n) because they partition the list into two parts, either around a pivot or simply in halves, and recursively repeat the process until we reach single elements, which are then arranged in order.
It's log n because that is how many times n can be divided by two, and n because each single element still needs to be compared with other elements so it can be arranged. Furthermore, minor tweaks such as storing len(L) in a variable, or making sure that bubble sort doesn't try to swap the already-sorted end, dramatically improved the run times of the sorts. To conclude, working with sorts in the labs has given me a better understanding of complexity and has made me more aware of how minor details can have a great effect on running time.
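A minimal sketch of the two O(n^2) sorts discussed above, with the tweaks mentioned (the function names and details are my own, not the exact lab code): insertion sort only touches the sorted prefix, so it degrades gracefully to O(n) on an already-sorted list, and bubble sort caches len(L) and never revisits the sorted tail.

```python
def insertion_sort(L):
    """Sort L in place. Best case O(n) on an already-sorted list,
    since each element is then compared only with its predecessor."""
    for i in range(1, len(L)):
        key = L[i]
        j = i - 1
        # Shift larger elements in the sorted portion one step right.
        while j >= 0 and L[j] > key:
            L[j + 1] = L[j]
            j -= 1
        L[j + 1] = key

def bubble_sort(L):
    """Sort L in place, never revisiting the sorted tail -- one of the
    minor tweaks that noticeably improves the run time."""
    n = len(L)  # store len(L) once instead of recomputing it each pass
    for end in range(n - 1, 0, -1):
        for i in range(end):
            if L[i] > L[i + 1]:
                L[i], L[i + 1] = L[i + 1], L[i]
```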

This was a good semester.

Sunday, March 23, 2014

Hints on Assignments and Sorts

Half of this week's lectures focused on Assignment 2 Part 2. The prof used all of Monday's class to provide substantial hints as to how the functions in regex_functions should be programmed. Some students expressed dismay: having spent time beforehand figuring out what the code should be, only for the prof to tell the entire class the solution, they feel their efforts were wasted, since they could have just come to class, listened, and spent the time they used solving it on something else. Personally, however, I think it is perfectly justified for the prof to give hints like he did in class; note that he didn't hold such a session right after Assignment 2 was assigned, but only three days before it is due. It is extremely helpful to those who have thought hard about the assignment over the past month and are still stuck on a small issue; all they needed to finish their code were those few lines about the assignment on the prof's slides. For some people it is not that they have no idea how to code in Python, but rather that they just need a push to get started.

In the second portion of the week, we looked at a few more kinds of sorts which perform significantly better than the ones we had before, such as selection sort or insertion sort. The sorts we saw were quick sort and merge sort; both partition the list into smaller pieces and call the code recursively to sort everything. The performance difference can be seen as the length of the list scales: the prof put a timer in the code of each sort, and merge sort came out as the "best" sort, using a fraction of the time of the others on long lists. The focus of this week was the implementation of merge sort: using cards, the prof went through, step by step, each recursive step on a short list and translated it into Python code, giving the students some idea of how these efficient sorts are programmed. In the second-last week of school, I hope we can go into more detail about the complexity of all the sorts and find a precise way of determining the big-oh complexity of algorithms.
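A sketch of merge sort along the lines of the card demonstration (my own hedged version, not the prof's exact code): split in halves, sort each half recursively, then merge the two sorted halves.

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One of the two slices below is empty; append the leftovers.
    return result + left[i:] + right[j:]

def merge_sort(L):
    """Partition L in halves and recurse until single elements: O(n log n)."""
    if len(L) < 2:  # base case: 0 or 1 element is already sorted
        return L
    mid = len(L) // 2
    return merge(merge_sort(L[:mid]), merge_sort(L[mid:]))
```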

Tuesday, March 18, 2014

Complexity

This week a new topic was started, one which was covered moderately in CSC108: complexity and run times. Complexity is a way of classifying programs based on the number of steps a program takes to complete, depending on the amount of information it processes. The motivation for this concept is clear: a program's complexity is directly related to its performance, so in order to modify or improve a program to make it more efficient, it is necessary to understand the complexity of the code. For example, in Assignment 1, the recursive code to find the smallest number of moves has complexity O(n!), and in class we were told that some students were able to solve the minimum number of moves for 1000 cheeses in a matter of seconds, while mine takes a full minute to find the minimum for 20 cheeses. Thus, modifying an algorithm that runs repeatedly in the code can greatly improve performance. But how is a program's performance quantified? The answer is running-time analysis, which expresses the running time as a function of the size of the input for a specific program; it can range from code that is constant for all inputs, like appending to a list, to code that grows logarithmically, like binary search. The prof then went on to compare linear search and binary search, and various sorting methods, to contrast the amount of time that algorithms of different complexity take on the same problems. The differences the prof demonstrated in class sparked my interest in learning more about how to implement code of various complexities; it serves as a reminder that I should always keep efficiency in mind while programming and try to make the running time needed to accomplish a function's goal as small as possible.
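To illustrate the linear-versus-binary comparison from class, here is a rough sketch of my own (not the lecture's code): linear search checks every element, O(n), while binary search halves a sorted list's search range at each step, O(log n).

```python
def linear_search(L, x):
    """O(n): check each element in turn; return its index, or -1."""
    for i, v in enumerate(L):
        if v == x:
            return i
    return -1

def binary_search(L, x):
    """O(log n): halve the search range of a SORTED list each step."""
    lo, hi = 0, len(L) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if L[mid] == x:
            return mid
        elif L[mid] < x:
            lo = mid + 1  # target is in the upper half
        else:
            hi = mid - 1  # target is in the lower half
    return -1
```

For a list of 1000 elements, binary search needs at most about 10 comparisons, while linear search may need all 1000.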

Sunday, March 9, 2014

LinkedList

This week the professor dove further into the concept of linked lists; the concept is not so difficult after studying recursion and trees for around a month now. The past week was more a cumulative review of everything we have learned than an introduction of new concepts. The objective is to create a class to represent a linked list, which is similar to a tree except that each node has only one child. Following the class-design steps, the first task is to figure out the inherent properties the class should have, which go into the initialization. Since a linked list is basically a chain, we want it to have a head and also a tail, which we call rest, consisting of the rest of the chain. With the basic structure figured out, we include the desired methods such as prepend, adding a node to the front of the chain, or decapitate, removing the front node. The prof then went on to define an alternative version of the LinkedList; the concept remains the same, but we added methods commonly used on a list, such as insertion and deletion at a desired index of the LinkedList. These were achieved through recursion, as the function calls itself until it reaches the base case, where it arrives at the desired position in the LinkedList and uses prepend or decapitate depending on the method. This careful analysis of the LinkedList reinforces my knowledge of object-oriented programming and recursion at the same time, allowing me to further review the concepts we've learned, and it will definitely help me in completing Assignment 2, which requires similar manipulation of Trees and recursion.
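A rough sketch of the LinkedList class described above, using the head/rest structure and the prepend/decapitate names from lecture; the recursive insert and the to_list helper are my own additions for illustration, not necessarily what the prof wrote.

```python
class LinkedList:
    """A linked list: a head value plus the rest of the chain."""

    def __init__(self, head, rest=None):
        self.head = head
        self.rest = rest  # another LinkedList, or None at the end

    def prepend(self, value):
        """Add a new node to the front of the chain."""
        self.rest = LinkedList(self.head, self.rest)
        self.head = value

    def decapitate(self):
        """Remove the front node, promoting the rest of the chain."""
        if self.rest is None:
            raise IndexError("cannot decapitate a one-node list")
        self.head, self.rest = self.rest.head, self.rest.rest

    def insert(self, index, value):
        """Recurse toward index (the base case), then prepend there."""
        if index == 0:
            self.prepend(value)
        else:
            self.rest.insert(index - 1, value)

    def to_list(self):
        """Plain-list view of the chain, handy for testing."""
        return [self.head] + (self.rest.to_list() if self.rest else [])
```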

Sunday, March 2, 2014

More On Recursion/Trees

The week leading up to the midterm focused on a new ADT named Trees. Though we have yet to see any major application of trees other than the LinkedList (a tree with arity one), the foundations have been very well laid out. In essence, a Tree in Python is a data structure containing numerous pieces of data of the same structure, which can themselves contain even more pieces of data. An intuitive way to understand this data structure is through its name, as observed in nature. A tree has a source from which its branches grow, and on each branch there can be either leaves or more branches, and it goes on in this way, forming the entirety of the tree. In programming, the joints from which the branches grow are called "nodes", the branches that grow from a node are its "children", and the nodes with no further branches are, naturally, "leaves". Since these Trees contain information structured very much like themselves, a very natural way to define a Tree is via recursion. To initialize a Tree, we define its node with a value, and go on to define its branches, which can either be None by default or another Tree, eventually reaching Trees with a value at the node but no children. A Tree's children are represented as a list containing more Trees; the reason for this is, perhaps, easy access to each child Tree through the indexing of the list. Here we can see another benefit of recursion: it is a natural way to define objects with a fractal-like property. Not only is recursion used in the initialization of the Tree class, it is used in various methods for its properties as well, such as arity, height, and number of leaves. Finding any of these properties requires a recursive function that runs until it reaches the base case: the trees that have no more children.
For example, to find the number of leaves in a tree, the easiest and cleanest way is to write a recursive function that goes into the children of children trees until it reaches a Tree without children, then adds one to the total number of leaves. List comprehensions are used in these functions as well, to find things such as the sum, minimum, and maximum. Overall, it seems that CSC148 is building cumulatively on what we learned in previous weeks to introduce more complex concepts; in the near future I hope we can see how the recursive structure of trees can be used to solve practical problems. Thanks for reading.
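A minimal Tree sketch along these lines (my own version, not the lecture code), with the recursive leaf count described above plus a height method; the base case in both is a node with no children.

```python
class Tree:
    """A node value plus a list of child Trees (empty for a leaf)."""

    def __init__(self, value, children=None):
        self.value = value
        self.children = children if children is not None else []

def count_leaves(t):
    """Recurse into the children until a childless node is reached."""
    if not t.children:  # base case: no children means this is a leaf
        return 1
    return sum(count_leaves(c) for c in t.children)

def height(t):
    """1 plus the tallest child's height; a lone node has height 1."""
    return 1 + max((height(c) for c in t.children), default=0)
```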

Sunday, February 2, 2014

Recursion

The main focus of this week's lab was recursion. As a math student, I have had some previous experience with recursion in the context of mathematics, when dealing with recursive formulas for sums and the integration of trig functions. In general, recursion is a process that repeats itself until a terminating condition, the base case, is reached (base case as in the base 'formula/function' that does not require recursion). In the CSC148 lectures, we mostly talk about recursive functions, which are functions in Python that call themselves in their own bodies. Personally, the key to getting the hang of recursive functions is to carefully trace the examples that Professor Heap has given in class and draw arrows to see what point of the function we are at. By manually tracing the recursive process, though it may be slow at first, one's mind starts to get used to the repetition and to see how the base case is reached every time the function is run. Once it was clear how recursion works, I began to see how it can save lines and lines of code when programming something that requires a repetitive process. However, when I looked over some of the examples the prof provided in class after lectures, I noticed these functions can be solved with an iterative solution as well; for example, the function which checks the depth of a nested list can be written using while loops. So I was wondering if there is a preferred method when one encounters a problem where the iterative method and the recursive method use the same number of lines. After some mild research, it can be said that iteration processes a problem repeatedly until it is done, while recursion continuously breaks a problem into smaller/simpler pieces until a base case is reached. Thus, when dealing with something that is like a "tree" with a base case in the middle, like the turtle example the prof has shown, it is more natural to apply recursion.
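The nested-list depth example can indeed be written both ways; here is a rough sketch of my own (not the prof's code) showing the recursive version next to an iterative while-loop version that manages an explicit stack.

```python
def depth(obj):
    """Recursive: a non-list has depth 0; a list is 1 deeper than
    its deepest element (an empty list has depth 1)."""
    if not isinstance(obj, list):
        return 0
    return 1 + max((depth(x) for x in obj), default=0)

def depth_iterative(obj):
    """Iterative: a while loop over an explicit stack of
    (item, nesting level) pairs, tracking the deepest list seen."""
    deepest = 0
    stack = [(obj, 0)]
    while stack:
        item, level = stack.pop()
        if isinstance(item, list):
            deepest = max(deepest, level + 1)
            stack.extend((x, level + 1) for x in item)
    return deepest
```

The iterative version still needs a stack of pending work, which is exactly what recursion provides for free via the call stack.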
To conclude, the introduction to recursion in programming has equipped me with another tool that I can apply to solve real-life problems, namely the assignment that is due in two weeks.

Wednesday, January 22, 2014

Object Oriented Programming

The main focus of the past two weeks of class has been object-oriented programming, with an emphasis on classes and their methods. As a student coming from CSC108, where these topics were briefly mentioned, I was able to gain a more in-depth understanding of the concepts. In lectures, we were given examples such as a class called Point, and through the creation of its properties and methods we got a clear demonstration of how classes should be made, when to make them, and how to adapt them to the programmer's needs. Furthermore, the main ideology conveyed in class was the mentality of being "lazy", meaning that one should be mindful to avoid code that is redundant or inefficient. The two concepts given to the class that embody this ideology were list comprehensions and recursion. Although neither was covered heavily, we could see through their usage in simple examples how they reduce long stretches of code to something as short as a single return statement. In general, the past few lectures focused on establishing the basic concepts and the right mindset for object-oriented programming through straightforward examples, and they were most certainly interesting and helpful.
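A small sketch of the kind of Point class discussed in lecture; the particular methods here (equality, repr, and a distance method) are my own guesses at what such a class might include, not necessarily the lecture's exact version.

```python
class Point:
    """A 2-D point with the usual special methods and one behaviour."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __eq__(self, other):
        # Two Points are equal when their coordinates match.
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    def __repr__(self):
        return "Point({}, {})".format(self.x, self.y)

    def distance_to(self, other):
        """Euclidean distance to another Point."""
        return ((self.x - other.x) ** 2 + (self.y - other.y) ** 2) ** 0.5
```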