What a great turnout for SQL Saturday Chicago! Attended Brent Ozar's session in the morning, a brilliant, funny, entertaining class about how to think like the SQL engine! Here's a pic from the class:
Saturday, April 26, 2014
Friday, April 25, 2014
Post Easter Weekend Transaction Level and Isolation
Ok, so I won't lie... chapter 12 was about 70 pages long, and I only read it twice. This chapter, along with chapter 6, is very conceptual, which makes them a slog to get through. I enjoy the deep thinking involved, but it is difficult to keep up with every idea and execution. I'll be working through the practice exercises in the book this evening, and will try to remember the lingo for the test (CUE THE FLASH CARDS!)
It was largely about the idea of transactions. A transaction is a unit of work that either completes successfully as a whole or not at all... that's a very dumbed down definition, but it captures the all-or-nothing idea. In SQL Server there are different isolation levels that can be set before running code. I'm going to try to explain the six isolation levels introduced in the book in my own words:
Read Committed: The default isolation level for SQL Server. A query can only read data that has been committed
Read Uncommitted: Removes shared locks on SELECT statements, reads uncommitted data; DIRTY READS!
Read Committed Snapshot (RCSI): The default isolation level for Azure SQL Database; stores row versions in tempdb so readers see the last committed version instead of blocking
Repeatable Read: Data read in a transaction can be reread later in the same transaction; updates and deletes of the selected rows are prevented, but it can still see newly inserted rows: PHANTOM READS
Snapshot: Like RCSI but for the whole transaction; cannot see phantom reads
Serializable: Highest isolation level, all reads are repeatable, and new rows cannot appear in the underlying ranges
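To make the dirty-read idea concrete, here's a minimal two-session sketch (the dbo.Accounts table and its columns are made up for illustration):

```sql
-- Session 1: update inside an open transaction, but don't commit yet
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;

-- Session 2: READ UNCOMMITTED sees session 1's uncommitted change (a dirty read)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Balance FROM dbo.Accounts WHERE AccountID = 1;

-- Session 2: under the default READ COMMITTED, the same SELECT would instead
-- block until session 1 commits or rolls back
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT Balance FROM dbo.Accounts WHERE AccountID = 1;

-- Session 1: undo the change; the dirty read saw data that never really existed
ROLLBACK TRANSACTION;
```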
SQL Saturday Chicago begins tomorrow!
After rearranging my schedule I'll be attending SQL Saturday 291... CHICAGO STYLE! I'm excited for this because, with a lot going on right now, I wouldn't say that my studies have been lacking, but more or less stalled. I think that's a perception thing on my part, because I think of my progression toward passing this exam as an exercise in 'doing', but the last few chapters have been more about 'absorbing'. Understanding concepts, nailing down definitions, but not the 'rada-tap-tap' on the keyboard that I'm used to.
I've given thought to reading the rest of the book through one time, then going back and revisiting things that I either need to learn more about or need to remind myself of.
As of now, I'm reading through chapter 13, which I understand fairly well. Implementing stored procs is really my bread and butter on this exam, since that is mostly what my development background has taught me how to do. A quick glance through this, and then on to chapter 14, where I'm very eager to learn about query performance tuning.
Cheers all, I'll take a pic of SQL Saturday tomorrow morning!
Friday, April 18, 2014
Insert, Update, Delete Sandbox fun
After reading through merging, inserting, updating, and deleting I performed the exercises in the book, but it didn't stick in my head as well as I thought it would. For that reason I began to play on my own using the database that I had created in the past from the contacts in my phone. Using the phone number as the primary key I had two tables (the old table and a new table), and I began to update and insert new contacts into the old table.
Sandbox practice is a great way to learn, and can be more relevant to the real world as well. I will post the DML from my update statements when I get back to my home computer...
Cheers!
Monday, April 14, 2014
Chapter 11 practice exercises from book.. do over again!
After reading through chapter 11 a few times and getting my head around the ideas, I started doing the exercises in the book tonight... all I can say is they are great!
I think this is about the third time that I have written this, but the book does a wonderful job of keeping things simple at first and then expanding on the ideas. Chapter 11's exercises definitely prove this, and I have noted to myself (by way of this post on this blog) that I will go back and do these exercises again before test time.
On to chapter 12!
Saturday, April 12, 2014
Merge read, not written...
I said that I would write about merge last night, but I had family over, so I was a bit busy cooking and cleaning. Perhaps I'll have time to write down my thoughts on merge tonight.
I did however read chapter 11 yesterday (the first two lessons). I always read through a chapter once quickly, knowing that I won't absorb every piece of information, but instead exposing myself to what the chapter covers. This seems to be the best way for me to learn new material... read through once to get the lay of the land, let it sink in for a day or two, then go back in and read through it again and do the examples provided by the book, then if needed go through it a third time. If done correctly this can be completed in 5 days to a week. Remembering the content is the more challenging part, which is why I need to go back at least once a week and review what the chapter covered. Reviewing and doing homework is where my attention has been recently, as I try to learn new content and maintain my performance with the old.
Anywho, yes, more chapter 11 and merge statements tonight (fingers crossed)!
UPDATE: now written, see the previous post
Friday, April 11, 2014
Ch. 11 Sequence/Identity and Merge
Chapter 11 lessons one and two deal with sequence/identity and merge statements, respectively.
I've learned about these in the past, but it was great to review these ideas again and learn a few new things about them. For example, the identity property has been around in SQL Server for some time now, but in 2012 SEQUENCE was introduced. There are a few key differences between sequence and identity, which I will get to in a minute, but first it's important to understand what these properties are used for.
Identity and sequence both automatically assign a value to a column when a row is inserted. Only one column in a table can have the identity property.
As of now, the most common place I see the identity property used is in practice exercises, such as my example of finding the sum of all rows in a table back in my chapter 5 homework (I remember this because I just did the homework Tuesday :)). In that example you first had to create a table, then insert values; here's how that was done:
USE TestDB
IF OBJECT_ID('sumquestion') IS NOT NULL DROP TABLE sumquestion;
CREATE TABLE sumquestion (
ID INT IDENTITY (1,1),
Name VARCHAR(10)
);
INSERT INTO sumquestion (Name)
VALUES ('Alpha'), ('Beta'), ('Beta'), ('Charlie'), ('Charlie'), ('Charlie');
SELECT * FROM sumquestion
Notice that when I inserted values into table sumquestion I didn't have to insert a value for ID, just Name, because the IDENTITY property on the ID column automatically assigns a value for each row. Placing (1,1) next to INT tells SQL Server to start at number 1 and increase each value by 1 (this is the default).
Important items that are pointed out in this section are:
- There are differences between DELETE and TRUNCATE with respect to the identity property: DELETE (even with a WHERE clause) does not reset the identity value, however TRUNCATE does.
- Identity does not guarantee uniqueness or gap-free values... if the current identity value is 5 and an insert fails, that insert would have taken the value 6, but because it failed, 6 is skipped. Make another insert and it will get 7; your identity column will now show 4, 5, 7, 8, etc...
- The following functions help you with identity: SCOPE_IDENTITY() returns the last identity value generated in the current session and scope. @@IDENTITY returns the last identity value generated in your session regardless of scope. IDENT_CURRENT() accepts a table name as input and returns the last identity value generated for that table regardless of session.
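To see the three functions side by side, here's a sketch reusing the sumquestion table from the example above:

```sql
-- Insert one more row, then compare the three identity functions
INSERT INTO sumquestion (Name) VALUES ('Delta');

SELECT SCOPE_IDENTITY()             AS last_in_scope,   -- this session, this scope
       @@IDENTITY                   AS last_in_session, -- this session, any scope (e.g. a trigger's insert)
       IDENT_CURRENT('sumquestion') AS last_in_table;   -- this table, any session
```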
As far as SEQUENCE goes, it's important to remember that a sequence is actually an object in the database, not just a column property. Here are the key differences between IDENTITY and SEQUENCE according to the book:
- Identity is tied to a particular column in your table so you can't remove the property and add it to something else
- Identity is table specific, sequence is not (because it's a standalone database object)
- With identity you need to create a row before the value is assigned, this is not true with sequence
- You can't update the identity column
- Identity can't support cycling (meaning once it runs out of values it can't go back to the start)
- Truncate resets the identity value
The way that I remember the differences between the two is that SEQUENCE is newer to SQL Server, so it comes with more customization. For example, with a sequence you're able to specify a starting value, how much to increment by, a minimum value, a maximum value, and whether it's allowed to cycle or not. An example of the syntax of SEQUENCE is something like this:
CREATE SEQUENCE Sales.SequOrderIDs AS INT
MINVALUE 1
CYCLE;
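A fuller sketch with the extra options and NEXT VALUE FOR (the sequence name here is made up, and the Sales schema is assumed to exist):

```sql
CREATE SEQUENCE Sales.SeqDemoIDs AS INT
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE 5
    CYCLE;

-- Unlike identity, a value can be fetched before any insert happens,
-- and the same sequence can feed many tables
SELECT NEXT VALUE FOR Sales.SeqDemoIDs AS orderid;
SELECT NEXT VALUE FOR Sales.SeqDemoIDs AS orderid;
-- After 5 is handed out, CYCLE sends it back to MINVALUE instead of erroring
```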
--
So, as far as the MERGE statement goes, it's just as it sounds in the spoken word... it deals with synchronizing data from one table to another based on certain clauses. Here is how TechNet describes MERGE, which I believe is very concise and simple:
The MERGE syntax consists of five primary clauses:
- The MERGE clause specifies the table or view that is the target of the insert, update, or delete operations.
- The USING clause specifies the data source being joined with the target.
- The ON clause specifies the join conditions that determine where the target and source match.
- The WHEN clauses (WHEN MATCHED, WHEN NOT MATCHED BY TARGET, and WHEN NOT MATCHED BY SOURCE) specify the actions to take based on the results of the ON clause and any additional search criteria specified in the WHEN clauses.
- The OUTPUT clause returns a row for each row in the target that is inserted, updated, or deleted.
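Putting those five clauses together on my phone-contacts sandbox, a MERGE might look something like this (the table and column names here are made up; the idea is syncing the new contacts table into the old one on the phone-number key):

```sql
MERGE INTO dbo.ContactsOld AS tgt
USING dbo.ContactsNew AS src
    ON tgt.PhoneNumber = src.PhoneNumber
WHEN MATCHED THEN                  -- phone number exists in both: refresh the name
    UPDATE SET tgt.ContactName = src.ContactName
WHEN NOT MATCHED BY TARGET THEN    -- new contact: insert it
    INSERT (PhoneNumber, ContactName)
    VALUES (src.PhoneNumber, src.ContactName)
WHEN NOT MATCHED BY SOURCE THEN    -- contact no longer in the new table: remove it
    DELETE
OUTPUT $action, inserted.PhoneNumber, deleted.PhoneNumber;
```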
Tuesday, April 8, 2014
Lunch Time HW 4-8 As planned
Here is my lunch time project...
/*USE TSQL2012
SELECT categoryid, MIN(unitprice) AS mn
FROM production.products
GROUP BY categoryid
--
WITH cte AS(
SELECT categoryid, MIN(unitprice) AS mn
FROM production.products
GROUP BY categoryid)
SELECT p.productid, p.productname
FROM Production.Products AS p
INNER JOIN cte AS c
ON c.categoryid = p.categoryid
AND p.unitprice = c.mn
--WHERE p.unitprice = c.mn
--
SELECT s.supplierid, p.productname, p.productid, p.unitprice
FROM production.Suppliers AS s
INNER JOIN production.Products AS p
ON p.supplierid = s.supplierid
WHERE s.country LIKE 'Japan'
--
SELECT c.custid, c.companyname, o.orderdate
FROM sales.Customers AS c
RIGHT OUTER JOIN sales.Orders AS o
ON o.custid = c.custid
WHERE o.orderdate = '20070212'
--
IF OBJECT_ID('anyname','if') IS NOT NULL DROP FUNCTION anyname
CREATE FUNCTION anyname (@empid INT)
RETURNS TABLE AS RETURN (SELECT @empid AS empid);
--
SELECT TOP (2) productid, productname, supplierid, MIN(unitprice)
FROM Production.Products
WHERE supplierid = 1
GROUP BY productid, productname, supplierid
ORDER BY MIN(unitprice) ASC
*/
--
WITH cte AS(
SELECT TOP (2) productid, productname, supplierid, MIN(unitprice) AS mn
FROM Production.Products
WHERE supplierid = 1
GROUP BY productid, productname, supplierid
ORDER BY MIN(unitprice) ASC)
SELECT s.country, s.supplierid, A.*
FROM production.Suppliers AS s
CROSS APPLY cte AS A
WHERE s.country = 'Japan'
Monday, April 7, 2014
Updating HW sets
I haven't written for a while because I continue to work through the homework sets, specifically the ones that I posted on chapters 6-10 (even though I haven't even written HW for chapters 9 and 10 at this point). My main focus has been on the XML part of it, because while I had previously been working with a lot of element-centric XML, the HW that I posted contained an element/attribute combination. This totally threw me for a loop... I was having a hard time remembering how to find a value in XML that was an attribute, but the more I worked on it I began to remember that attributes are the easy ones because they are always in quotes, and can be queried with "@"! Once I figured that out everything made a lot more sense, but it did take me a number of trips through the book, as well as a post on SSC.
Anyways, that's my update, more posts as soon as I get through my self-prescribed HW set! I think I'll spend time tomorrow during the day doing chapter 4; I'm also going to hold myself accountable by posting the HW that I do from chapter 4 on here tomorrow afternoon.
Cheers!
Sunday, April 6, 2014
Answers 6-10
Here's what I've got so far, working on the rest of Q's and A's
2) LIKE and CONTAINS are similar, but against large text columns CONTAINS works better (faster). LIKE can't perform scans on formatted binary data; LIKE performs a pattern scan of each row, while CONTAINS "appl(ies) specific transformations at index and query time". It's smarter and faster for big data.
3) Stemming is a fancy way of saying a word has different inflected forms... past tense, future tense, plurals, etc. So a word like "drink" would be the stem for drank, drunk, drinking, drinks. This goes hand in hand with the last question about CONTAINS(), because CONTAINS can find the inflections of drink (meaning CONTAINS can find drank, drunk, drinking, drinks when you just tell it to find "drink"). Interestingly enough, CONTAINS can also find a synonym for another word, meaning if I type in the word car, it could find the word "auto" or "automobile" if those words are in the thesaurus that it uses (and you can edit the thesaurus if you want).
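Here's a sketch of both ideas with CONTAINS (the dbo.Documents table is hypothetical, and a full-text index on DocText is assumed to already exist):

```sql
-- Inflectional forms: matches drink, drinks, drank, drinking
SELECT DocID, DocText
FROM dbo.Documents
WHERE CONTAINS(DocText, 'FORMSOF(INFLECTIONAL, drink)');

-- Thesaurus: matches synonyms of car (auto, automobile) if the thesaurus defines them
SELECT DocID, DocText
FROM dbo.Documents
WHERE CONTAINS(DocText, 'FORMSOF(THESAURUS, car)');
```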
Again with the XML....
4) Ok, so the main difference between the types is that they each get laid out differently... recognizing how the nodes and their attributes look is important, and something that I should be able to visualize... in case I can't, here is some code that can help me:
IF OBJECT_ID('XMLForHomework') IS NOT NULL DROP TABLE XMLForHomework
CREATE TABLE XMLForHomework
(
ID INT PRIMARY KEY IDENTITY (1,1),
FirstName varchar(20),
LastName varchar(20),
FavoriteTeam varchar(10)
)
-- Add some stuff
INSERT INTO XMLForHomework VALUES
('Ben', 'Jamin', 'Tribe'),
('Chris', 'Smith', 'Sox'),
('Beth', 'Smith', 'Cubs'),
('Your', 'Mom', 'Yanks'),
('Your', 'Dad', 'Mets')
--
SELECT * FROM XMLForHomework FOR XML AUTO
SELECT * FROM XMLForHomework FOR XML RAW
SELECT * FROM XMLForHomework FOR XML RAW, ELEMENTS
SELECT * FROM XMLForHomework FOR XML PATH('Path')
--
DROP TABLE XMLForHomework
After looking at the different XML outputs I should be able to tell the differences between them. My understanding is that you may need different forms of XML in different situations, and for different applications.
5) So, check this out: I kept getting frustrated with this question because I wrote the first batch of code and kept getting an error, then fixed my "bugs" with the second query. Can you find the 3 differences between them? The correct answer is the bottom query.
USE AdventureWorks2012
WITH XMLNAMESPACE(
'http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/Resume' AS ns)
SELECT [Resume].value('(/ns:Resume/ns:EMail)', 'nvarchar(50)') AS emal
FROM HumanResources.JobCandidate;
--
WITH XMLNAMESPACES(
'http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/Resume' AS ns)
SELECT [Resume].value('(/ns:Resume/ns:EMail)[1]','varchar(50)') AS emal
FROM HumanResources.JobCandidate;
6) WITH XMLNAMESPACES(
'http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/Resume' AS ns)
SELECT [Resume].value('(/ns:Resume/ns:Name/ns:Name.First)[1]', 'varchar(50)') AS firstname,
[Resume].value('(/ns:Resume/ns:Name/ns:Name.Last)[1]', 'varchar(50)') AS lastname
FROM HumanResources.JobCandidate
WHERE
[Resume].exist('/ns:Resume/ns:Education/ns:Edu.Location/ns:Location/ns:Loc.State[.="IA "]')= 1
7)
a. A note on this one: I got really confused when I started working with attribute-centric XML, because most of the work I had done up to this point had been element-centric. I actually had to post this question on SSC to obtain a hint on the answer.
SELECT
@x.value('(root/production.Categories[@categoryname="Beer"]/@categoryid)[1]','INT') AS catid,
@x.value('(root/production.Categories[@categoryname="Beer"]/@categoryname)[1]', 'NVARCHAR(50)') AS catname
b. This is another tough one because it's important how you define "everything", and I wasn't very specific... I meant it as: how do you return the entire XML document? Which would be this:
SELECT @x.query('*') AS everything
c. The key to problem c is to shred everything and make a big table first with all of the data in it (a CTE), then query that table. Here's what that looks like:
WITH CTE AS(
SELECT
categoryid = Foo.value('(@categoryid)','NVARCHAR(250)'),
descrip = Foo.value('(@description)','NVARCHAR(250)'),
categoryname = Foo.value('(@categoryname)','NVARCHAR(250)')
FROM @x.nodes('/root/production.Categories') AS Tbl(Foo)
)
SELECT * FROM CTE
WHERE categoryid BETWEEN 9 AND 10
8) There are two types of indexes for XML: a primary index, and then, once you have a primary index, a secondary index that comes in three flavors: Path, Value, and Property. The primary index "contains a shredded persisted representation of the XML values"... basically a row is created for each node, and then those rows can be searched. A Path secondary index speeds up queries that use exist() or value(). Value helps with queries that are value based or use wildcards. Property is useful for queries that "retrieve one or more values from individual XML instances by using the value() method."
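The DDL for those index types looks roughly like this (table and index names are made up; a clustered primary key on the table is required before a primary XML index can be created):

```sql
CREATE TABLE dbo.XmlDocs (
    DocID  INT PRIMARY KEY,
    DocXml XML
);

-- The primary XML index: the shredded, persisted representation of the XML
CREATE PRIMARY XML INDEX PXML_DocXml
    ON dbo.XmlDocs (DocXml);

-- A secondary index FOR PATH, to speed up exist()/value() path lookups
-- (FOR VALUE and FOR PROPERTY follow the same pattern)
CREATE XML INDEX SXML_DocXml_Path
    ON dbo.XmlDocs (DocXml)
    USING XML INDEX PXML_DocXml FOR PATH;
```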
9) The differences between element-centric and attribute-centric XML, in my head, start with the way they look... element-centric uses a lot of nodes, which look like this: <root>value</root>, while attribute-centric looks like this: root="value"... or something to that effect. Either is fine; however, with element-centric XML you may want to declare a namespace.
10-12) /*USE TestDB
IF OBJECT_ID('QuestionTen') IS NOT NULL DROP TABLE QuestionTen
--
CREATE TABLE QuestionTen(
PlayerID INT NOT NULL,
FirstName VARCHAR(25) NULL)
--
ALTER TABLE QuestionTen
ADD Lastname VARCHAR(25) NOT NULL
--
ALTER TABLE QuestionTen
ADD CONSTRAINT PK_PlayerID PRIMARY KEY(PlayerID)*/
13) Well, with a primary key you can only have one per table; the primary key cannot have a NULL value, and it must be unique. A unique key is just like it sounds: it is unique, but it can be NULL (in SQL Server there can only be one NULL per unique column, because a second NULL would no longer be unique).
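A quick sketch of that one-NULL rule with a hypothetical table:

```sql
CREATE TABLE dbo.UniqueDemo (
    ID    INT PRIMARY KEY,
    Email VARCHAR(50) UNIQUE   -- unique constraint, NULLs allowed
);

INSERT INTO dbo.UniqueDemo VALUES (1, NULL);  -- succeeds
INSERT INTO dbo.UniqueDemo VALUES (2, NULL);  -- fails: SQL Server treats the second NULL as a duplicate
```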
14) Sticking with table "QuestionTen", here is what adding a default constraint would look like:
ALTER TABLE QuestionTen
ADD AddedDate date DEFAULT GETDATE()
15) Again, sticking with table "QuestionTen", here is what adding a check constraint would look like:
ALTER TABLE QuestionTen
ADD CONSTRAINT Chk_FirstName CHECK (Firstname <> 'Bob')
--This Insert will error out because FirstName = Bob
INSERT INTO dbo.QuestionTen (PlayerID, FirstName, LastName)
VALUES (1, 'Bob', 'Smith')
Thursday, April 3, 2014
Going through homework and reviews with limited time
The last few days I've been scrolling through my previous posts reviewing things such as pivoting, ordering, windowing, and the like, in an effort to refresh my brain on T-SQL syntax. One of the questions in chapter 5 (on grouping, I think) has to do with creating a table and then inputting data like 'alpha', 'beta', etc., which I think is good practice in itself... how to create a table, because creating views is similar, although you have a few more options with a view.
I'm getting antsy to take the exam again, but I know I need to be patient and work through what I've done so far. Time is my biggest obstacle right now, but I believe that if something is important to you, you'll make time for it. This is important to me, so working on homework during lunch or at night is my best option right now. That's why I haven't been doing many posts on here...
Ok back to working on Chapter 5 "all lesson sets"
Cheers!
Tuesday, April 1, 2014
Momentum from Saturday carrying me through exam & #sqlhelp
I'm still feeling the Saturday momentum that I got this weekend especially after being on twitter more today, and checking out some of the posts about MVP's.
A cool thing that I didn't realize is that if you put in the hashtag #SQLhelp, a lot of people will respond to your post. I haven't tried this yet, but I will tomorrow. Tonight I did some pivoting & chapter 5 homework, and got a massage for my ailing shoulder (which is feeling better).
I am going to finish chapter 5 homework tomorrow at lunch hopefully, then another rep of it tomorrow night, chapter 7 homework on Thursday, then back to reviewing 6-10. A full week of reviews for me, followed by nailing down the remaining chapters (writing out cliff notes and reviews) next week. One more week of reviews then the test! That's the plan at least…
Hopefully a month from now I'll have 70-461 under my belt!
Cheers all!