Wednesday, May 28, 2014

Update since last try

I have another attempt at the exam coming up this Friday, so I'm fine-tuning my studies to pass this time.  I need to spend more time on query optimization and tighten up my knowledge of XML.

I know that I have covered every topic in the book because I have read it cover to cover.  That gives me about as broad an understanding of the subject matter as someone can have, but because I don't use SQL for eight hours every day I have a hard time applying the knowledge to the specific situations that come up during the test.  Anyways, not passing the second time is not a downer... it's an igniter!!!

Friday, May 23, 2014

625, not enough

So another shot at the test coming up next week.  Scored 625 on my previous attempt, which was close but no cigar.

I felt like I visualized what the test was asking much more clearly in terms of the T-SQL pieces that needed to be answered.  According to my post-test strengths/weaknesses handout, I need to work on query optimization and creating database objects.  When I saw this I knew exactly which questions I had missed… because I was unsure when I answered them the first time.

That was last Friday morning.  Over the weekend I went to New Orleans for vacation, and after catching up on work when I got back, I'm now working further on my studies.  My plan for studying this week is a lot of flash cards with definitions on the back.

Cheers!

Wednesday, May 14, 2014

Countdown

T minus two days until the test, lots of practice in sandbox today, and lots of XML tomorrow. Cheers all!

Tuesday, May 6, 2014

Chapter 16 written review

Chapter 16 dealt with cursors and the comparison of cursors vs. set-based operations (kind of like in chapter 1).  It talked about loops a little more, how a loop is more of a cursor than a set-based operation, and how a set-based operation can be more efficient because the optimizer can tune it.  The main thing I got from lesson 1 was the cursor life cycle: if you are going to use a cursor you must first declare it, open it, fetch in a loop, close it, then deallocate it.
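Here's a minimal sketch of that life cycle, assuming a hypothetical dbo.Customers table with a custid column:

DECLARE @custid INT;

-- 1) declare the cursor over a query
DECLARE cust_cursor CURSOR FAST_FORWARD FOR
    SELECT custid FROM dbo.Customers;

-- 2) open it and 3) fetch in a loop
OPEN cust_cursor;
FETCH NEXT FROM cust_cursor INTO @custid;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT CAST(@custid AS VARCHAR(10));   -- row-by-row work goes here
    FETCH NEXT FROM cust_cursor INTO @custid;
END;

-- 4) close and 5) deallocate
CLOSE cust_cursor;
DEALLOCATE cust_cursor;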

Lesson 2 was interesting because it talked about temp tables vs table variables.  Interesting because this was discussed at SQL Saturday a week or so ago.  Brent Ozar talked about this in his seminar "think like an engine", where he compared indexes and brought up the idea of table variables.  What I learned about them from him is that they are virtually useless.  One of the lesson review questions asks when a table variable is useful (choose all that apply), and the answers were: "when the tables are very small and the plan is trivial, when the tables are very small and the plan is nontrivial, when the tables are large and the plan is trivial"; which to me seems like pretty much never.  Another thing I learned in this lesson is that if you want to create a temp table that is visible at the local (session) level you use the "#" sign (a table-variable version follows the temp table code below); such as:
CREATE TABLE #temp1
(
 col1 INT NOT NULL,
 col2 INT NOT NULL
)
But if you wanted to create one and have it be seen on a global level you use '##'; such as:
CREATE TABLE ##glob1
(
 col1 INT NOT NULL,
 col2 INT NOT NULL
)
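For comparison, here's roughly what the table variable version of the same structure would look like (just a sketch):

DECLARE @tv TABLE
(
 col1 INT NOT NULL,
 col2 INT NOT NULL
);
INSERT INTO @tv (col1, col2) VALUES (1, 1);
SELECT * FROM @tv;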

An interesting chapter again.  I will be reading the last chapter maybe tomorrow (maybe not, it's my birthday), then doing some sandbox homework in the next few evenings.

Cheers!!!

Monday, May 5, 2014

Chapter 15 Written review

Chapter 15 has three sections about indexes and statistics.

Lesson 1: This lesson deals with indexes, but before it does that it talks about the different ways that data is stored.  A heap is one way data is stored; I think of it as a book with a table of contents at the beginning, known as an Index Allocation Map.  Here is more info on heaps from the experts; I stared at the diagrams (pictures) in the book for a long time as I read through this stuff.  Then there are balanced trees (B-trees) that have levels of root, intermediate, and leaf (data) pages.  Think of an upside-down tree for this one, with the intermediates being the branches.  The main idea in this lesson, after explaining heaps and B-trees, was that you can create a clustered index, which turns the table into a B-tree that can be seeked (you can only have one per table), and nonclustered indexes that point to the data.  Here are a couple of important things to remember about clustered vs. nonclustered indexes: Rakesh Blog.  The way I think of it: a clustered index is great for finding and seeking data; a nonclustered index can help find data in a heap, and a heap can be the better choice when you mostly need to insert data, since maintaining a clustered index during heavy inserts has a cost.  This lesson also dealt with indexed views, which is (as it sounds) a view with an index on it.  These can be useful in ETL transactions.
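A quick sketch of the DDL involved, on a made-up table (the table and index names are my own):

CREATE TABLE dbo.OrdersDemo
(
 orderid   INT  NOT NULL,
 custid    INT  NOT NULL,
 orderdate DATE NOT NULL
);
-- One clustered index per table: the table's rows become the leaf level of the B-tree
CREATE UNIQUE CLUSTERED INDEX idx_cl_orderid ON dbo.OrdersDemo (orderid);
-- As many nonclustered indexes as you need: their leaf rows point back to the data
CREATE NONCLUSTERED INDEX idx_nc_custid ON dbo.OrdersDemo (custid);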

Lesson 2: Discussed SARGs (search arguments) and introduced the idea that using indexes when querying data will find your data faster.  No brainer, right?  Well, it also explains when the optimizer decides to use an index or not, and where it is a good idea to put one.  Simply put, it's best to have a clustered index on a column that is a unique identifier or an INT; make sure that it is short, though, so that it can be seeked easily.  This lesson discusses the WHERE clause too, and how it can be a powerful tool for getting an index used during a query, which makes sense if you remember the logical order in which SQL processes a query: FROM, WHERE, GROUP BY, HAVING, SELECT, ORDER BY.  I will add that it's not a good idea to wrap a column in a function in your WHERE clause, because once a calculation is involved SQL can no longer seek to a value; it would need to perform the calculation and then check each row to see whether the value matches, which is a scan by definition.  This lesson does not talk about query hints, which was a little disappointing to me; I think that's in chapter 17.
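To make that concrete, here's the kind of rewrite the chapter is getting at, against the TSQL2012 sample database's Sales.Orders table (assuming an index on orderdate):

-- Non-SARGable: the function on the column forces a scan
SELECT orderid
FROM Sales.Orders
WHERE YEAR(orderdate) = 2007;

-- SARGable: a range predicate on the bare column can use an index seek
SELECT orderid
FROM Sales.Orders
WHERE orderdate >= '20070101' AND orderdate < '20080101';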

Lesson 3:  Was interesting because it talked about statistics, which from what I understand is a HUGE topic.  From what I've gathered I don't think I would ever mess with the statistics generated by SQL Server.  The book tries to give examples of when you would want to manage or create your own statistics, but geez... I find that VERY advanced, at least for my level.  The basic idea is that SQL Server is always creating and updating statistics about the distribution of key values, which the optimizer uses to estimate row counts.
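Even without managing them, you can at least look at what SQL Server has created; a quick sketch against the TSQL2012 sample database:

-- List the statistics objects SQL Server keeps for a table
SELECT name, auto_created, stats_id
FROM sys.stats
WHERE object_id = OBJECT_ID('Sales.Orders');

-- Force a refresh of all statistics on that table
UPDATE STATISTICS Sales.Orders;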

All in all this was a very informative chapter; the index stuff was good, but I especially enjoyed learning about heaps and B-trees.

Cheers!

Saturday, May 3, 2014

Chapter 14 Written Review

Chapter 14 dealt with query optimization which is a really cool topic!  I've listened to one particular podcast probably 5 times about query optimization (podcast is SQL Down Under, and the guest is Grant Fritchey).

Lesson 1: Query optimization is exactly what it sounds like: trying to find the most efficient way of executing code.  The optimizer estimates a cost for each candidate plan, and that cost-based optimization is only one of about five steps the engine goes through before returning the result set of a query.  Here are the pieces in order: T-SQL text, parsing, binding, optimization, execution... google these or look at page 520 in the kit to understand them further.  SQL Server has a few tools to help analyze queries: Extended Events, SQL Trace, and SQL Profiler.  Note that SQL Trace and Profiler are deprecated and will be removed in a future version of SQL Server, so it's better to use Extended Events.
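Here's a bare-bones Extended Events session sketch (the session name is my own invention) that captures completed statements into a ring buffer:

CREATE EVENT SESSION LongQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer;

-- Start it, run the workload you're curious about, then stop and drop it
ALTER EVENT SESSION LongQueries ON SERVER STATE = START;
-- ALTER EVENT SESSION LongQueries ON SERVER STATE = STOP;
-- DROP EVENT SESSION LongQueries ON SERVER;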

Lesson 2: Talks about execution plans and how to read graphical plans.  Make sure to read them from right to left and top down.  As I learned in the podcast, it's important to remember that the cost numbers shown in both estimated and actual execution plans are still only ESTIMATES.

Lesson 3: Was about dynamic management objects (DMOs).  DMOs are another tool for finding queries that don't run optimally.  There are more than 130 DMOs available in SQL Server 2012.  Important ones to remember are those that deal with the SQL Server operating system (SQLOS), the execution-related DMOs, and the index-related DMOs.
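A typical execution-related DMO query, as a sketch, pulls the most expensive cached statements:

-- Top 5 cached query statements by total logical reads
SELECT TOP (5)
       qs.total_logical_reads,
       qs.execution_count,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;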


Credit: Querying Microsoft SQL Server 2012; Itzik Ben-Gan, Dejan Sarka, Ron Talmage

Friday, May 2, 2014

Just a little recap Ch. 12 & 13 written review

So beginning with chapter 12;

Lesson 1 was about transaction commands, how SQL Server handles transactions, how the user is able to see where SQL Server is in a transaction, and locking for readers and writers.  It also talked about rollbacks, which essentially means not committing a transaction.  Lots of lingo in this chapter, hence the flash cards that I made for myself.
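A tiny sketch of those commands, using a made-up dbo.Accounts table:

BEGIN TRAN;

UPDATE dbo.Accounts SET balance = balance - 100 WHERE accountid = 1;
UPDATE dbo.Accounts SET balance = balance + 100 WHERE accountid = 2;

SELECT @@TRANCOUNT AS open_transactions;   -- shows how many transactions are currently open

ROLLBACK TRAN;   -- undoes both updates; COMMIT TRAN would make them permanent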

Lesson 2 dealt with errors (raising errors and throwing errors).  One interesting note: Microsoft suggests using the THROW statement instead of RAISERROR in new applications, and RAISERROR can't be used in SQL Server 2014's natively compiled stored procedures.  It also talked about XACT_ABORT (the XACT part means 'transaction'); as you might imagine, this setting aborts the transaction when an error occurs.  Lastly, I think of TRY/CATCH kind of like an IF statement, because if an error occurs during the TRY part of the batch, control is passed to the CATCH part.
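Putting those pieces together, a hedged sketch (dbo.Accounts is a made-up table again):

SET XACT_ABORT ON;   -- an error dooms the transaction so it can't be left half-done

BEGIN TRY
    BEGIN TRAN;
    INSERT INTO dbo.Accounts (accountid, balance) VALUES (1, 100);
    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN;
    THROW;   -- re-raises the original error to the caller
END CATCH;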

Lesson 3 Was about Dynamic SQL, and SQL injection.  I had a post on SSC a while back when I first started studying that someone commented that I needed to use dynnamic SQL, and this example from the book is kind of what I was getting at in my post:
USE TSQL2012;
GO
DECLARE @SQLString AS NVARCHAR(4000), @address AS NVARCHAR(60);
SET @SQLString = N'
SELECT custid, companyname, contactname, contacttitle, address
FROM [Sales].[Customers]
WHERE address = @address';
SET @address = N'5678 rue de l''Abbaye';
EXEC sp_executesql
@statement = @SQLString
, @params = N'@address NVARCHAR(60)'
, @address = @address;

In this example, we pass the SQL string to sp_executesql along with a separate parameter definition and value... parameterizing dynamic SQL this way is the best way to protect against SQL injection.

Chapter 13

Lesson 1 is all about my fav, stored procedures!  This is simple, if you have a query like:
SELECT name
FROM customers
WHERE firstname = 'Bob';
You can create a procedure out of this, and interchange 'Bob' with a different parameter.  The code would look something like this:
USE TSQL
IF OBJECT_ID('findbyname','P') IS NOT NULL DROP PROC findbyname
GO
CREATE PROC findbyname
   @custname AS VARCHAR(30)
AS
BEGIN
SELECT name
FROM customers
WHERE firstname = @custname
RETURN
END;
--Now you can execute the procedure by using EXECUTE, or EXEC for short:
EXEC findbyname
   @custname = 'Bob' --Or steve, or whatever name you want

This lesson also talks about branching logic for SQL code: statements such as IF/ELSE, WHILE, WAITFOR, GOTO, and RETURN.  When I read through this part of the book I had to remind myself that this branching logic is great, but SQL is not a procedural language at heart, so when you start running WHILE loops and whatnot it can kill your performance.  I use WHILE loops a bunch for the development work that I do, but that's a totally different animal.  Just a thought on this.  Also remember that procs (stored procedures) can call other procs... again, the code can get messy.

Lesson 2 dealt with triggers.  Triggers are just like procs, but something has to happen in order for them to be executed.  SQL Server supports triggers for two different kinds of events: data manipulation events (inserts, updates, and deletes) and data definition events (creating or altering objects).
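A minimal DML trigger sketch, assuming made-up dbo.Customers and dbo.CustomerAudit tables:

CREATE TRIGGER trg_Customers_Insert
ON dbo.Customers
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- "inserted" is the pseudo-table holding the rows that fired the trigger
    INSERT INTO dbo.CustomerAudit (custid, auditdate)
    SELECT custid, SYSDATETIME()
    FROM inserted;
END;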

Lesson 3 is about user-defined functions.  I found a great page on Stack Overflow a while back that I bookmarked, which talks about the differences between user-defined functions and procs... make sure to review this: difference between udf's and procs
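For my own reference, here is what the proc example above would look like as an inline table-valued function (same made-up customers table), which you can then use right inside a query:

CREATE FUNCTION dbo.fn_findbyname (@custname AS VARCHAR(30))
RETURNS TABLE
AS
RETURN
(
    SELECT name
    FROM customers
    WHERE firstname = @custname
);
GO
SELECT * FROM dbo.fn_findbyname('Bob');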

Sources: sqlhints.com, stackoverflow.com, Querying Microsoft SQL Server 2012; Itzik Ben-Gan, Dejan Sarka, Ron Talmage

Thursday, May 1, 2014

May 1st, Date Set!

After a long while, I'm making another post to note where I currently am.  A while back I posted a picture of some flash cards that I was using for chapter 12, I believe (the heart of SQL transactions).  Since then I've read through chapter 13 and am almost done with 14.  Stored procedures and UDFs were some of the things covered in these chapters, and as I alluded to before, that's what I feel is my bread and butter as far as writing T-SQL goes.

As far as the test taking goes, I'm so happy that a lot of the folks on Born to Learn have taken and completed the test; it totally inspires me to do the same!  I know that the training kit allows for "self paced" learning, but I feel like I've been studying for years!  Partially it's me not thinking that I understand the material well enough and wanting an even firmer handle on it than I have, and partially it's the fact that my daily job (which is somewhat seasonal, spring being the busiest season) has me massively slammed.  I'm learning all of the SQL material on my own, which forces me to prioritize my daily routine.  Things as simple as getting a haircut have been put off recently (which I will be doing today)!

Nonetheless, I have set a date to take the test: May 16th!  My plan of action is to finish reading the book by next Friday and make flash cards for the conceptual pieces of it, then work hard on sandbox work over the course of the following week until I take the test Friday the 16th.  I honestly think that I could pass the test today, but to ensure that I WILL pass it on the 16th I'm going to do a lot of reviewing of topics and scripting of T-SQL.

I think it would be healthy for me to spend time writing down what I've learned in the chapters since I last posted (that would be 11, 12, 13, and now 14) so that I have them for my own reference as I review over the next two weeks.

Cheers!

Saturday, April 26, 2014

SQL Saturday Chicago!

What a great turnout for SQL Saturday Chicago!  Attended Brent Ozar's session in the morning, a brilliant, funny, entertaining class about how to think like the SQL engine!  Here's a pic from the class:

Friday, April 25, 2014

Post Easter Weekend Transaction Level and Isolation

Ok, so I won't lie... chapter 12 was about 70 pages long, and I only read it twice.  This chapter, along with chapter 6, is very conceptual, which makes them a slog to get through.  I enjoy the deep thinking involved, but it is difficult to keep up with every idea and its execution.  I'll be working through the practice exercises in the book this evening, and will try to remember the lingo for the test (CUE THE FLASH CARDS!)


It was largely about the idea of transactions.  A transaction is a unit of work that either completes successfully as a whole or doesn't happen at all... that's a very dumbed-down definition, but that all-or-nothing part is the key point.  In SQL Server there are different isolation levels that can be set before running code.  I'm going to try to explain the isolation levels introduced in the book in my own words (a quick example of setting one follows the list):

Read Committed: the default isolation level for SQL Server; a query can only read data that has been committed
Read Uncommitted: removes shared locks for readers, so a query can read uncommitted data; DIRTY READS!
Read Committed Snapshot (RCSI): the default for Windows Azure SQL Database; stores row versions in tempdb so readers see the last committed version instead of being blocked
Repeatable Read: data read in a transaction can be read again later in the same transaction; updates and deletes of the rows you selected are prevented, but new rows can still appear: PHANTOM READS
Snapshot: like RCSI but at the transaction level; the transaction sees a consistent snapshot, so phantom reads can't happen
Serializable: the highest isolation level; all reads are repeatable and new rows are not allowed into the ranges you've already read
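Setting one of these for a session looks like this (a sketch against the TSQL2012 sample database):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRAN;
SELECT custid, companyname FROM Sales.Customers WHERE custid = 1;
-- other work here; the row read above can't be updated or deleted by other sessions until we finish
COMMIT TRAN;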

SQL Saturday Chicago begins tomorrow!

After rearranging my schedule I'll be attending SQL Saturday 291... CHICAGO STYLE!  I'm excited for this because with a lot of stuff going on right now, I wouldn't say that my studies have been lacking, but they have more or less stalled.  I think that's a perception thing on my part, because I think about my progression towards passing this exam as an exercise of 'doing', but the last few chapters have been more about 'absorbing': understanding concepts and nailing down definitions, without the 'rada-tap-tap' on the keyboard that I'm used to.

I've given thought to reading the rest of the book through one time, then going back and revisiting things that I either need to learn more about or need to remind myself of.

As of now, I'm reading through chapter 13, which I understand fairly well.  Implementing stored procs is really my bread and butter on this exam, since that is mostly what my development background has taught me how to do.  A quick pass through this, and then on to chapter 14, where I'm very eager to learn about query performance tuning.

Cheers all, I'll take a pic of SQL Saturday tomorrow morning!

Friday, April 18, 2014

Insert, Update, Delete Sandbox fun

After reading through merging, inserting, updating, and deleting I performed the exercises in the book, but it didn't stick in my head as well as I thought it would.  For that reason I began to play on my own using the database that I had created in the past from the contacts in my phone.  Using the phone number as the primary key I had two tables (the old table and a new table), and I began to update and insert new contacts into the old table.

Sandbox practice is a great way to learn, and can be more relevant to the real world as well.  I will post the DDL from my update statements when I get back to my home computer...

Cheers!

Monday, April 14, 2014

Chapter 11 practice exercises from book.. do over again!

After reading through chapter 11 a few times and getting my head around the ideas I started doing the exercises in the book tonight... all I can say are they are great!

I think this is about the third time that I have written this, but the book does a wonderful job of keeping things simple at first and then expanding on the ideas.  Chapter 11's exercises definitely prove this, and I have noted to myself (by way of this post) that I will go back and do these exercises again before test time.

On to chapter 12!

Saturday, April 12, 2014

Merge read, not written...

I said that I would write about merge last night, but had family over so was a bit busy cooking/cleaning last night.  Perhaps I'll have time to write down my thoughts on merge tonight.

I did however read chapter 11 yesterday (the first two lessons).  I always read through a chapter once quickly, knowing that I won't absorb every piece of information, but instead exposing myself to what the chapter involves.  This seems to be the best way for me to learn new material... read through once to get the lay of the land, let it sink in for a day or two, then go back and read through it again and do the examples provided by the book, then if needed go through it a third time.  Done correctly, this can be completed in five days to a week.  Remembering the content is the more challenging part, which is why I need to go back at least once a week and review what the chapter covered.  Reviewing and doing homework is where my attention has been recently as I try to learn new content while maintaining the old.

Anywho, yes, more chapter 11 and merge statements tonight (fingers crossed)!

UPDATE, now written and updated in previous post

Friday, April 11, 2014

Ch. 11 Sequence/Identity and Merge

Chapter 11 lessons one and two deal with sequence/identity and merge statements respectively.

I've learned about these in the past, but it was great to review the ideas again and learn a few new things about them.  For example, the identity property has been around in SQL Server for some time now, but SEQUENCE was introduced in 2012.  There are a few key differences between sequence and identity which I will get to in a minute, but first it's important to understand what they are used for.

Identity and Sequence both automatically assign a value to the column in a table once the row is inserted.  Only one column in a table can have an identity property.

As of now the most common place I see the identity property used is in practice exercises, such as my example of finding the sum of all rows in a table back in my chapter 5 homework (I remember this because I just did that homework Tuesday :)).  In that example you first had to create a table, then insert values; here's how that was done:

USE TestDB
IF OBJECT_ID('sumquestion') IS NOT NULL DROP TABLE sumquestion;
CREATE TABLE sumquestion (
ID INT IDENTITY (1,1),
Name VARCHAR(10)
);
INSERT INTO sumquestion (Name)
VALUES ('Alpha'), ('Beta'), ('Beta'), ('Charlie'), ('Charlie'), ('Charlie');
SELECT * FROM sumquestion

Notice that when I inserted values into table sumquestion I didn't have to insert a value for ID, just Name, because the IDENTITY property on the ID column automatically assigns a value for the row.  Placing (1,1) next to INT tells SQL Server to start at number 1 and increase each value by 1 (this is the default).

Important items that are pointed out in this section are:

  • There are differences between DELETE and TRUNCATE where the identity property is concerned: deleting rows (even all of them) does not reset the identity value, but TRUNCATE TABLE does.
  • Identity does not guarantee the uniqueness of values, and it doesn't guarantee there are no gaps: if the current identity value is 5 and an insert fails, that insert still consumed the value 6, so 6 will be skipped.  Make another insert and it gets 7; your identity column will now show 4, 5, 7, 8, etc...
  • The following functions help you with identity (a quick demo follows this list): SCOPE_IDENTITY returns the last identity value generated in your session in the current scope.  @@IDENTITY returns the last identity value generated in your session regardless of scope.  IDENT_CURRENT accepts a table as input and returns the last identity value generated in that table regardless of session.
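Here's a quick check of those three functions, reusing the sumquestion table from the code above:

INSERT INTO sumquestion (Name) VALUES ('Delta');

SELECT SCOPE_IDENTITY()             AS last_in_scope,
       @@IDENTITY                   AS last_in_session,
       IDENT_CURRENT('sumquestion') AS last_in_table;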
As far as SEQUENCE goes, it's important to remember that a sequence is actually an object in the database, not just a column property.  Here are the key differences between IDENTITY and SEQUENCE according to the book:
  • Identity is tied to a particular column in your table so you can't remove the property and add it to something else
  • Identity is table specific, sequence is not (because it's a standalone database object)
  • With identity you need to create a row before the value is assigned, this is not true with sequence
  • You can't update the identity column
  • Identity can't support cycling (meaning once it runs out of values it can't go back to the start)
  • Truncate resets the identity value
The way that I remember the differences between the two is that SEQUENCE is newer to SQL Server, so it comes with more customization.  For example, with a sequence you're able to specify a minimum value, how much to increment by, a maximum value, whether it's allowed to cycle or not, and a starting value.  The syntax of SEQUENCE looks something like this:

CREATE SEQUENCE Sales.SequOrderIDs AS INT
   MINVALUE 1
   CYCLE;
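Once the sequence exists you pull values from it with NEXT VALUE FOR, either directly or as a column default; a sketch (the demo table is made up):

-- Grab the next value directly
SELECT NEXT VALUE FOR Sales.SequOrderIDs AS next_id;

-- Or let a table pull from the sequence by default
CREATE TABLE dbo.SeqDemo
(
   orderid INT NOT NULL DEFAULT (NEXT VALUE FOR Sales.SequOrderIDs),
   custid  INT NOT NULL
);
INSERT INTO dbo.SeqDemo (custid) VALUES (1);
SELECT * FROM dbo.SeqDemo;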

--

So, as far as the MERGE statement goes, it's just as it sounds in the spoken word… it deals with synchronizing data from one table into another based on certain clauses.  Here is how TechNet describes MERGE, which I believe is very concise and simple (a small example follows the list):

The MERGE syntax consists of five primary clauses:
  • The MERGE clause specifies the table or view that is the target of the insert, update, or delete operations.
  • The USING clause specifies the data source being joined with the target.
  • The ON clause specifies the join conditions that determine where the target and source match.
  • The WHEN clauses (WHEN MATCHED, WHEN NOT MATCHED BY TARGET, and WHEN NOT MATCHED BY SOURCE) specify the actions to take based on the results of the ON clause and any additional search criteria specified in the WHEN clauses.
  • The OUTPUT clause returns a row for each row in the target that is inserted, updated, or deleted.
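Here's a hedged sketch of those clauses working together, assuming two made-up contact tables keyed on phone number:

MERGE INTO dbo.ContactsOld AS tgt
USING dbo.ContactsNew AS src
   ON tgt.phone = src.phone
WHEN MATCHED THEN
   UPDATE SET tgt.name = src.name
WHEN NOT MATCHED BY TARGET THEN
   INSERT (phone, name) VALUES (src.phone, src.name)
WHEN NOT MATCHED BY SOURCE THEN
   DELETE
OUTPUT $action, inserted.phone, deleted.phone;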
Credit: Querying Microsoft SQL 2012; Itzik Ben-Gan, Dejan Sarka, Ron Talmage; http://technet.microsoft.com/en-us/library/bb522522(v=sql.105).aspx

Tuesday, April 8, 2014

Lunch Time HW 4-8 As planned

Here is my lunch time project...

/*USE TSQL2012
SELECT categoryid, MIN(unitprice) AS mn
FROM production.products
GROUP BY categoryid
--
WITH cte AS(
SELECT categoryid, MIN(unitprice) AS mn
FROM production.products
GROUP BY categoryid)
SELECT p.productid, p.productname
FROM Production.Products AS p
INNER JOIN cte AS c
ON c.categoryid = p.categoryid
AND p.unitprice = c.mn
--WHERE p.unitprice = c.mn
--
SELECT s.supplierid, p.productname, p.productid, p.unitprice
FROM production.Suppliers AS s
INNER JOIN production.Products AS p
ON p.supplierid = s.supplierid
WHERE s.country LIKE 'Japan'
--
SELECT c.custid, c.companyname, o.orderdate
FROM sales.Customers AS c
RIGHT OUTER JOIN sales.Orders AS o
ON o.custid = c.custid
WHERE o.orderdate = '20070212'
--
IF OBJECT_ID('anyname','if') IS NOT NULL DROP FUNCTION anyname
CREATE FUNCTION anyname (@empid, INT)
RETURN TABLE AS();
--
SELECT TOP (2) productid, productname, supplierid, MIN(unitprice)
FROM Production.Products
WHERE supplierid = 1
GROUP BY productid, productname, supplierid
ORDER BY MIN(unitprice) ASC
*/
--
WITH cte AS(
SELECT TOP (2) productid, productname, supplierid, MIN(unitprice) AS mn
FROM Production.Products
WHERE supplierid = 1
GROUP BY productid, productname, supplierid
ORDER BY MIN(unitprice) ASC)
SELECT s.country, s.supplierid, A.*
FROM production.Suppliers AS s
CROSS APPLY cte AS A
WHERE s.country = 'Japan'

Monday, April 7, 2014

Updating HW sets

I haven't written for a while because I continue to work through the homework sets, specifically the ones that I posted for chapters 6-10 (even though I haven't even written HW for chapters 9 and 10 at this point).  My main focus has been on the XML part of it, because while I previously had been working with a lot of element-centric XML, the HW that I posted contained an element/attribute combination.  This totally threw me for a loop... I was having a hard time remembering how to find a value in XML that was an attribute, but the more I worked on it the more I remembered that attributes are the easy ones, because they are always in quotes and can be queried with "@"!  Once I figured that out everything made a lot more sense, but it did take a number of trips through the book, as well as a post on SSC.

Anyways, that's my update, more posts as soon as I get through my self-prescribed HW set!  I think I'll spend time tomorrow during the day doing chapter 4, and I'm going to hold myself accountable by posting the HW that I do from chapter 4 on here tomorrow afternoon.

Cheers!

Sunday, April 6, 2014

Answers 6-10

Here's what I've got so far, working on the rest of Q's and A's

1) In my own words, stopwords are just noisy nothing words that don't need to be searched and can be skipped over.  If I had a sentence like this (which I hear from way too many people), "Um, what time are the, um, Bears, um, playing Sunday?" and I had to search that sentence on a computer, I could save myself a lot of room by telling my code: every time you see "um", skip it!  At least that's my understanding of it...

2)  LIKE and CONTAINS are similar, but against large amounts of text CONTAINS works better (faster).  LIKE can't search formatted binary data, and LIKE performs a character pattern scan against each row, while CONTAINS "appl(ies) specific transformations at index and query time"; it's smarter and goes faster for big stuff.

3)  Stemming is a fancy way of saying a word has different forms... past tense, future tense, plurals, etc.  So a word like "drink" would be the stemmer for drank, drunk, drinking, drinks.  This goes hand in hand with the last question about CONTAINS(), because CONTAINS can find the inflections of drink (meaning it can find drank, drunk, drinking, drinks when you just tell it to find "drink").  Interestingly enough, CONTAINS can also find a synonym of a word, meaning if I type in the word car it could find "auto" or "automobile" if those words are in the thesaurus that it uses (and you can edit the thesaurus if you want).
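As a sketch of what those two CONTAINS tricks look like (assuming a made-up dbo.Articles table that already has a full-text index on its body column):

-- Inflections: matches drink, drank, drinking, drinks...
SELECT articleid, title
FROM dbo.Articles
WHERE CONTAINS(body, 'FORMSOF(INFLECTIONAL, drink)');

-- Synonyms from the thesaurus: car, auto, automobile...
SELECT articleid, title
FROM dbo.Articles
WHERE CONTAINS(body, 'FORMSOF(THESAURUS, car)');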

Again with the XML....

4)  Ok, so the main difference between the types is that they each lay the XML out differently... recognizing how the nodes and their attributes look is important, and something that I should be able to visualize... in case I can't, here is some code that can help me:

IF OBJECT_ID('XMLForHomework') IS NOT NULL DROP TABLE XMLForHomework
CREATE TABLE XMLForHomework
(
ID INT PRIMARY KEY IDENTITY (1,1),
FirstName varchar(20),
LastName varchar(20),
FavoriteTeam varchar(10)
)
-- Add some stuff
INSERT INTO XMLForHomework (FirstName, LastName, FavoriteTeam) VALUES
('Ben', 'Jamin', 'Tribe'),
('Chris', 'Smith', 'Sox'),
('Beth', 'Smith', 'Cubs'),
('Your', 'Mom', 'Yanks'),
('Your', 'Dad', 'Mets')
--
SELECT * FROM XMLForHomework FOR XML AUTO
SELECT * FROM XMLForHomework FOR XML RAW
SELECT * FROM XMLForHomework FOR XML RAW, ELEMENTS
SELECT * FROM XMLForHomework FOR XML PATH('Path')
--
DROP TABLE XMLForHomework

After looking at the different outcomes of the XML I should be able to tell the differences between them.  My understanding is that you may need different forms of XML in different situations, and for different applications.

5)  So, check this out, I kept getting frustrated with this question because I wrote the first batch of code and kept getting an error, then fixed my "bugs" in the second batch... can you find the 3 differences between them?  The correct answer is the bottom query.

USE AdventureWorks2012
WITH XMLNAMESPACE(
'http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/Resume' AS ns)
SELECT [Resume].value('(/ns:Resume/ns:EMail)', 'nvarchar(50)') AS emal
FROM HumanResources.JobCandidate;
--
WITH XMLNAMESPACES(
'http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/Resume' AS ns)
SELECT [Resume].value('(/ns:Resume/ns:EMail)[1]','varchar(50)') AS emal
FROM HumanResources.JobCandidate;

6)  WITH XMLNAMESPACES(
'http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/Resume' AS ns)
SELECT [Resume].value('(/ns:Resume/ns:Name/ns:Name.First)[1]', 'varchar(50)') AS firstname,
[Resume].value('(/ns:Resume/ns:Name/ns:Name.Last)[1]', 'varchar(50)') AS lastname
FROM HumanResources.JobCandidate
WHERE
[Resume].exist('/ns:Resume/ns:Education/ns:Edu.Location/ns:Location/ns:Loc.State[.="IA "]')= 1

7)
a. So a note on this one, I got really confused when I started working with attribute centric XML because most of the work I had done up to this point had been in element centric.  I actually had to post this question on SSC to obtain a hint on the answer.

SELECT
@x.value('(root/production.Categories[@categoryname="Beer"]/@categoryid)[1]','INT') AS catid,
@x.value('(root/production.Categories[@categoryname="Beer"]/@categoryname)[1]', 'NVARCHAR(50)') AS catname


b. This is another tough one because it matters how you define "everything", and I wasn't very specific... What I meant was: how do you return the entire XML document?  Which would be this:

SELECT @x.query('*') AS everything

c. The key to problem c is to shred everything into a big table first with all of the data in it (a CTE), then query that table; here's what that looks like:

WITH CTE AS(
SELECT 
categoryid = Foo.value('(@categoryid)','NVARCHAR(250)'),
descrip = Foo.value('(@description)','NVARCHAR(250)'),
categoryname = Foo.value('(@categoryname)','NVARCHAR(250)')
FROM @x.nodes('/root/production.Categories') AS Tbl(Foo)
)
SELECT * FROM CTE
WHERE categoryid BETWEEN 9 AND 10

8) There are two types of indexes for XML: a primary XML index, and, once you have a primary index, secondary XML indexes that come in three flavors: PATH, VALUE, and PROPERTY.  The primary index "contains a shredded persisted representation of the XML values"... basically a row is created for each node, and then those rows can be searched.  The PATH secondary index speeds up queries that use exist() or value() with path expressions.  VALUE helps queries that are value based or use wildcards.  PROPERTY is useful for queries that "retrieve one or more values from individual XML instances by using the value() method."
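The DDL for those, sketched on a made-up table with an XML column (a clustered primary key is required before you can add the XML indexes):

CREATE TABLE dbo.XmlDocs
(
   docid INT NOT NULL CONSTRAINT PK_XmlDocs PRIMARY KEY,
   doc   XML NOT NULL
);

CREATE PRIMARY XML INDEX pxml_doc ON dbo.XmlDocs (doc);

CREATE XML INDEX sxml_doc_path ON dbo.XmlDocs (doc)
   USING XML INDEX pxml_doc FOR PATH;   -- FOR VALUE and FOR PROPERTY are the other flavors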

9) The differences between element-centric and attribute-centric XML, in my head, start with the way they look... element centric uses a lot of nodes, which look like this: <root>value</root>, and attribute centric looks like this: root="value"... or something to that effect.  Either is fine, however with element-centric XML you may want to declare a namespace.

10-12) /*USE TestDB
IF OBJECT_ID('QuestionTen') IS NOT NULL DROP TABLE QuestionTen
--
CREATE TABLE QuestionTen(
PlayerID INT NOT NULL,
FirstName VARCHAR(25) NULL)
--
ALTER TABLE QuestionTen
ADD Lastname VARCHAR(25) NOT NULL
--
ALTER TABLE QuestionTen
ADD CONSTRAINT PK_PlayerID PRIMARY KEY(PlayerID)*/

13) Well, with a primary key you can only have one per table, the primary key cannot have a NULL value, and it must be unique.  A unique key is just like it sounds: it must be unique, but it can be NULL (only one NULL per unique constraint though, because a second NULL would no longer be unique).
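A sketch of that difference on a made-up table:

CREATE TABLE dbo.KeyDemo
(
   id    INT NOT NULL CONSTRAINT PK_KeyDemo PRIMARY KEY,
   email VARCHAR(100) NULL CONSTRAINT UQ_KeyDemo_email UNIQUE
);

INSERT INTO dbo.KeyDemo (id, email) VALUES (1, NULL);   -- allowed: one NULL is fine
INSERT INTO dbo.KeyDemo (id, email) VALUES (2, NULL);   -- fails: a second NULL violates the UNIQUE constraint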

14) Sticking with table "QuestionTen", here is what adding a default constraint would look like:

ALTER TABLE QuestionTen
ADD AddedDate date DEFAULT GETDATE()

15) Again, sticking with table "QuestionTen", here is what adding a check constraint would look like:

ALTER TABLE QuestionTen
ADD CONSTRAINT Chk_FirstName CHECK (Firstname <> 'Bob')
--This Insert will error out because FirstName = Bob
INSERT INTO dbo.QuestionTen (PlayerID, FirstName, LastName)
VALUES (1, 'Bob', 'Smith') 

Thursday, April 3, 2014

Going through homework and reviews with limited time

The last few days I've been scrolling through my previous posts reviewing things such as pivoting, ordering, windowing, and the like in an effort to refresh my brain on T-SQL syntax.  One of the questions in chapter 5 (I think it was on grouping) has to do with creating a table and then inputting data like 'alpha', 'beta', etc., which I think is good practice in itself... knowing how to create a table, because creating views is similar, although you have a few more options with a view.

I'm getting antsy to take the exam again but I know I need to be patient and work through what I've done so far.  Time is my biggest obstacle right now, but I believe that if something is important to you, you'll make time for it.  This is important to me, so working on homework during lunch or at night is my best option right now.  That's why I haven't been doing many posts on here...

Ok back to working on Chapter 5 "all lesson sets"

Cheers!

Tuesday, April 1, 2014

Momentum from Saturday carrying me through exam & #sqlhelp

I'm still feeling the Saturday momentum that I got this weekend especially after being on twitter more today, and checking out some of the posts about MVP's.

A cool thing that I didn't realize is that if you put in the hash tag #SQLhelp a lot of people will respond to your post.  I haven't written this in the forum yet, but I will tomorrow.  Tonight I did some pivoting & chapter 5 homework, and got a massage for my ailing shoulder (which is feeling better).

I am going to finish chapter 5 homework tomorrow at lunch hopefully, then another rep of it tomorrow night, chapter 7 homework on Thursday, then back to reviewing 6-10.  A full week of reviews for me, followed by nailing down the remaining chapters (writing out cliff notes and reviews) next week.  One more week of reviews then the test!  That's the plan at least…

Hopefully a month from now I'll have 70-461 under my belt!

Cheers all!

Saturday, March 29, 2014

First SQL Saturday

Currently attending my first SQL Saturday in Madison, Wisconsin... Will have a trip report at the end of the day; the first session is pictured below... Excited!  Wondering if I will grasp things today or if it will all go sailing over my head... Cheers!

Friday, March 28, 2014

Learning new stuff while not forgetting old stuff

Well, I posted the homework for chapters 6-8ish and worked on it once or twice.  Then the other day I decided to go back and clean up a few posts in my blog so as not to confuse anyone who might read it, and in doing so I started to review some of the homework that I assigned myself.  When I read the questions and tried to construct a query that would answer them in SQL I had a feeling of writer's block; like, ok, how do I start this thing?  Do I need a CTE?  What am I grouping?  What tables am I using?  How do I cross apply, and what does that do again?

I think it's virtually impossible not to backslide on the things you've learned without looking at them on a daily basis, but thankfully when I felt the writer's block I was able to go back and review some notes to get back on track as to how to construct a sufficient query.  Nonetheless, it did take time for me to look up notes, a luxury I will not have in a test environment.

Realizing that the initial things that I learned had gotten fuzzy I've been working through my homework sets from chapters 3, 4, 5.  Reading the questions slowly, and taking my time has helped me to refresh my memory of these queries.

Learning SQL for this exam is tough if you're new to SQL because there are many ways to go about learning it.  It's not as if you first learn the SELECT statement in chapter one and the ORDER BY in the last chapter (if that makes sense).  You learn principal concepts that are simple at first, then shift and morph to become increasingly more difficult.

So with that all being said, it can be frustrating trying to learn something new and not forgetting the old.  The best remedy I've found for blending the two is constant application.  I've already told myself that I'm not going to take the exam again until I can complete every homework question in my blog without looking at notes.  I then will know that I'm ready…

Cheers to those on the warpath with me...

Pop Quiz

Well since I've spent a lot of time recently on the middle chapters of the book, I decided to review a couple old topics, hence the Pop Quiz title.  Just two questions:


1)  In Chapter 5 I had a question like this in the homework that I used from stack overflow:

USE TESTDB
GO
CREATE TABLE chapter5 ( ID INT, Name VARCHAR(10));
USE TestDB
INSERT INTO chapter5 (ID, Name) VALUES (1, 'Alpha'), (2, 'Beta'), (3, 'Beta'), (4, 'Beta'), (5, 'Charlie'), (6, 'Charlie')

Here's the query itself
USE TestDB

SELECT NAME, COUNT(ID) AS idcoount
FROM chapter5
WHERE ID IS NOT NULL
GROUP BY Name
UNION ALL
SELECT 'SUM', COUNT(ID)
FROM chapter5

--
Explain why this invokes a conversion error:
USE TestDB
SELECT COUNT(ID) AS idcoount, Name
FROM chapter5
WHERE ID IS NOT NULL
GROUP BY Name
UNION ALL
SELECT 'SUM', COUNT(ID)
FROM chapter5

2)  Write a query that finds locations (region, and city) that are common (belong in both) between hr.employees and sales.customers using TSQL2012

That's all for now

Monday, March 24, 2014

Homework Chapters 6-10

I'm going to attempt to create a homework set that covers things I've learned in chapters 6 through 10... keep in mind these are just exercises; it's important to know how to read and write this stuff, but it's also important to understand it at a deeper level *cough* *cough* so that you're not lost in the multiple choice portion of the test.

Chapter 6
1. What are stopwords?
2. What is the difference between LIKE and CONTAINS in the WHERE clause?
3. What is a Stemmer, and what does it do?

Chapter 7 (The Big Dog)
4. What's the difference between FOR XML AUTO, FOR XML PATH, and FOR XML RAW?
5. Use humanresources.jobcandidate in AdventureWorks2012 and display the email addresses of candidates (EMail)
6. Use humanresources.jobcandidate in AdventureWorks2012 and display the first name and last name of people whose education was in the state of Iowa (IA)... (ns:Resume/ns:Education/ns:Edu.Location/ns:Location/ns:Loc.State)
7. Use this code:
 DECLARE @X XML
 SET @X = '<root>
<production.Categories categoryid="1" categoryname="Beverages" description="Soft drinks, coffees, teas, beers, and ales" />
<production.Categories categoryid="2" categoryname="Condiments" description="Sweet and savory sauces, relishes, spreads, and seasonings" />
<production.Categories categoryid="3" categoryname="Confections" description="Desserts, candies, and sweet breads" />
<production.Categories categoryid="4" categoryname="Dairy Products" description="Cheeses" />
<production.Categories categoryid="5" categoryname="Grains/Cereals" description="Breads, crackers, pasta, and cereal" />
<production.Categories categoryid="6" categoryname="Meat/Poultry" description="Prepared meats" />
<production.Categories categoryid="7" categoryname="Produce" description="Dried fruit and bean curd" />
<production.Categories categoryid="8" categoryname="Seafood" description="Seaweed and fish" />
<production.Categories categoryid="9" categoryname="Beer" description="Budweiser" />
<production.Categories categoryid="10" categoryname="liquor" description="Captain Morgan" />
</root>'

Query @X to:
         A. Return the categoryid of categoryname "Beer"
         B. Return everything from @x
         C. Shred categoryname and description from categoryid 9 and 10
8. What kind of XML indexes can be created?  page 260
9. What's the difference between element centric XML, and attribute centric?

Chapter 8
10. Create a table called QuestionTen in TESTDB.  Column one called "PlayerID" should be an integer that is 5 digits, this column can't be NULL.  Column two called "FirstName" as a variable character up to 25 characters long, this can be NULL.
11. Alter table "QuestionTen" and add "LastName" as variable character 25, not null
12. Add primary key constraint to table "QuestionTen" on the PlayerID column.
13. Discuss what the difference is between a primary key constraint, and a unique key constraint
14. What is a default constraint, what's the syntax?
15. What is a check constraint, what's the syntax?

That should keep me busy, along with not forgetting stuff from chapters 3-5.... Cheers!

Chapter 9
COMING SOON

Chapter 10
COMING SOON

Sunday, March 23, 2014

Chapters 6-10, Life things

Well since there are about 17 chapters in this book I am going to try to split up the homework section by chapters of 5.  Previously I wrote a homework assignment that incorporated chapters 1-5, I'm going to do the same thing in my next post about chapters 6 through 10...  Chapter 7 (XML) will be favored as I write out this homework, but creating views, Inline Functions, creating/altering/deleting objects, insert/update/deleting rows of data, and adding constraints, will be in there as well.  Look for this homework in the next few days.

Tonight I'm going to do a few homework assignments from chapters 3, 4, and 5... maybe one from each and then relax.  Things have been crazy since my setback/injury, trying to play catch up and having little time to get on Blogger.  I always try to read the chapter I'm working on once through first, then go back and look at the lessons individually to help me absorb the material, which leads to fewer posts because the time that I have to blog is used for reading.  Also, these last few chapters have been different than chapters 3, 4, and 5: there is still a lot of content, but it lends itself less to working things out for hours in T-SQL... maybe I'm not being creative enough.

Anyways, I've gotta "keep on keeping on" to get through this book.... enough with any excuses that I have.  I mentioned in the forums that I would like to test out this Friday (3/28), but with roughly 6 chapters left to go that's a very tall task.  Nonetheless, I'll be inspired after SQL Saturday #287 next weekend... I'll for sure make a post from there.

Cheers... stay on the war path!

UPDATE: here is the homework I did today, mixed from a few different homework assignments.  I'm only posting the answers, because I want to hold myself accountable for doing the homework, and not just saying that I did it....:

SELECT DATEDIFF(MONTH,MIN(hiredate),MAX(hiredate))
FROM hr.Employees
--
WITH CTE AS (SELECT orderid, custid, shippeddate,
DATEDIFF(DAY,shippeddate,GETDATE()) AS diff
FROM sales.orders)
SELECT orderid, custid, shippeddate,
'since shipped' =
CASE
WHEN diff > 1 THEN diff
WHEN diff IS NULL THEN '-999999'
END
FROM CTE
--
SELECT custid, orderid,
AVG(val) OVER(PARTITION BY custid
ORDER BY orderdate, orderid
ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS lstfour
FROM sales.OrderValues
--
SELECT custid, orderdate, orderid, SUM(val) OVER(PARTITION BY custid
ORDER BY orderdate, orderid
ROWS UNBOUNDED PRECEDING) AS runtot
FROM sales.OrderValues
--
WITH CTE AS(
SELECT orderid, shipperid, freight,
ROW_NUMBER() OVER(PARTITION BY shipperid
ORDER BY freight DESC, orderid) AS rankpership
FROM sales.Orders)
SELECT *
FROM CTE
WHERE rankpership <= 3
--
SELECT o.custid, c.city, c.country, COUNT(*) AS Swedeorders
FROM Sales.orders AS o
INNER JOIN Sales.Customers AS c
ON o.custid = c.custid
WHERE c.country LIKE 'Sweden'
GROUP BY GROUPING SETS((o.custid, c.city, c.country), ())

Thursday, March 20, 2014

Chapter 10

Hitting double digits in chapter #'s... Woot!

This chapter is about inserting, updating, and deleting. Topics that have many real world uses... I'll be paying keen attention to this chapter and examples.

-Cheers!

Wednesday, March 19, 2014

Chapter 9 Suggested Prac. 2... Synonyms!

So here is the code from Chapter 9's second suggested practice.

--First off I check to see if there are any procedures on my system to begin with
SELECT name
FROM sys.procedures
-- Then I create an insert procedure on production.categories like the book suggests
CREATE PROCEDURE Production.CategoriesINSERT
@categoryname nvarchar(15),
@description nvarchar(200)
AS
INSERT INTO production.categories (categoryname, description) VALUES
(@categoryname, @description)
--I check to see if it works
EXEC production.CategoriesINSERT 'Beer','Budweiser'
--It worked!  Then I create a Synonym for it
CREATE SYNONYM categoryinsert FOR Production.CategoriesINSERT
--Check to see if it worked again
EXEC categoryinsert 'liquor', 'Captain Morgan'
--Now run it without any values and observe the error
EXEC categoryinsert
--The error refers to the procedure, not the synonym because
--the synonym simply points to the procedure


Appropriate timing

I'm on the SQL Server Central daily email blast, and today the topic was views!



I thought it was appropriate!

One other note: as I was going back and doing some of the homework I assigned myself, I realized that Blogger was autocorrecting my spelling of 'OrderID' to 'Ordered'... it's a small, annoying thing that I thought I should point out.  I'm correcting it when I see it.  Also, I revised the answer to question five of chapter 5... it was wrong.

Cheers

Tuesday, March 18, 2014

Chapter 9 Suggested Practice 1

This chapter dealt with Views, Inline Functions, and Synonyms

The first practice asks you to create a view with CHECK OPTION, then try to insert something through the view; because the view was designed with CHECK OPTION the insert fails and is terminated.  Take the CHECK OPTION off and it works.

So just to be clear, I think of a view a lot like a CTE that you can always access.  There are a few options with a view though, such as WITH SCHEMABINDING, WITH ENCRYPTION, WITH VIEW_METADATA, and WITH CHECK OPTION.  It might not matter, but something to remember is that WITH CHECK OPTION goes after the WHERE clause of the SELECT statement.

So this is the code that I used in this practice:

IF OBJECT_ID(N'chnine','V') IS NOT NULL DROP VIEW chnine
GO
CREATE VIEW chnine
AS
SELECT * FROM hr.Employees
WHERE country = 'USA'
WITH CHECK OPTION;
--
INSERT INTO dbo.chnine (lastname,firstname,title,titleofcourtesy,birthdate,hiredate,
address,city, region, postalcode, country, phone) VALUES('smith','jim','ceo','mr.',
'19980216','20080201','1234elm','Dallas','TX','23201','Canada','2065511253')
--
SELECT * FROM chnine
--
DELETE FROM chnine
WHERE lastname = 'smith'
--

You shouldn't run all of this code at once; highlight each piece and run it separately (CREATE VIEW has to be the only statement in its batch).  I'm working on the second piece of this chapter now, and will post results for the Synonyms practice next!

Cheers!

Sunday, March 16, 2014

Sudoku & TSQL

Have spent some time today looking over some basic queries from chapter 3, 4, and 5.  My brain starts firing and thinking clearer the more problems that I try to work out.

I haven't written this before, but I'm a huge fan of the Sudoku puzzles (google it if you have never heard of it).  When you get in a groove in Sudoku you feel like you blast through the puzzle, seeing every angle, every line, knowing every missing number... I feel the same way when I begin doing some of my homework assignments.  Visualization becomes more clear, focus is sharper, and I'm able to link things in my mind faster.

Homework tonight, finishing chapter 9 tomorrow.

Cheers!

Friday, March 14, 2014

Set back to sailing forward.. BLITZ!!!

So I've had a sluggish week with the injuries and some other things going on in my life, but I'm ready to rededicate myself to finishing this book and getting through the exam.

Through my Born to Learn forum buddies I'm motivated to get going again.  First things first: do more homework each day, even if it is only one or two problems.  It's important to keep all of this stuff fresh in my head, because I can tell after a week off that I'm not as sharp as I was when I was doing this every day.

A buddy from the forums posted a question about subqueries and I had to get my book out and read a bit of chapter four to remember what they were.  I remember them as queries that could be run on their own, but because I have been using so many CTEs I couldn't explain them as well as I could have if I had been doing homework each night…

So I'm back on the war path, time to blitz the rest of this book and pass this exam!!!

Cheers!!!

Wednesday, March 12, 2014

Setback

I haven't written anything for a few days due to some health setbacks, specifically muscle spasms in my upper back and neck area.  Sitting down at a computer has been extraordinarily painful, as has driving in a car or sitting at a desk.  Standing seems to be ok, and if I'm active I notice it less.

Today I finally had enough and went to the doctor to get a diagnosis; thankfully it was only muscle spasms.  I'll be recovering for the next few days and then will write more.

I plan on taking in some SQL YouTube videos while I'm resting.

-Cheers

Friday, March 7, 2014

Chapter 9 lesson 1 Views

Chapter 9 Lesson 1 deals with views.  Creating and altering views along with rules associated with views.  A couple of the rules that should be known:

  • Technically you can only have one SELECT statement in a view, but you can work around this by using a UNION clause
  • The CREATE VIEW statements must be the first statement in a batch
  • You can set the column names of a view at the beginning of the statement instead of inside the SELECT statement, which looks like this:
CREATE VIEW Sales.salesrep (yearstarted, amountsold) WITH SCHEMABINDING
AS
SELECT
YEAR(A.hiredate),
SUM(B.sales)
FROM hr.employees AS A
  JOIN sales.metrics AS B
    ON a.empid = b.empid
GROUP BY YEAR(hiredate)

Instead of
CREATE VIEW Sales.salesrep WITH SCHEMABINDING
AS
SELECT
YEAR(A.hiredate) AS yearstarted,
SUM(B.sales) AS amountsold
FROM hr.employees AS A
  JOIN sales.metrics AS B
    ON a.empid = b.empid
GROUP BY YEAR(hiredate)

A small difference, but worth writing out and taking note of.

There are a few options when creating a VIEW as well.  These options can help with data integrity, and keeping things clean (a very technical way of putting it right?).

  1. WITH SCHEMABINDING binds the view to the tables it refers to; according to the training kit, the referenced tables cannot have their schema definitions changed unless the view is dropped first
  2. WITH ENCRYPTION makes it more difficult for users to see the SELECT text of the view
  3. WITH VIEW_METADATA returns the view's own metadata to client APIs such as DB-Library or ODBC, instead of the base tables' metadata
  4. WITH CHECK OPTION is a filter that makes sure that modifications made through the view must still meet the criteria you set in the view's WHERE clause.
I realized that I didn't really define what a view is… A view is an object that you can query like a table, but (unless you index it) it doesn't store data itself; it's a saved SELECT that references one or more base tables.

To create view you write: CREATE VIEW abc.xyz WITH SCHEMABINDING AS.. (with schema binding optional)
To alter a view you write: ALTER VIEW abc.xyz WITH SCHEMABINDING AS… (with schema binding optional)
To drop a view you write: DROP VIEW abc.xyz;

Before creating a new view in place of an old one you should start with:
IF OBJECT_ID('abc.xyz', N'V') IS NOT NULL DROP VIEW abc.xyz
GO
CREATE VIEW abc.xyz…

Lastly, here is a way to check what views you currently have in your database:
USE (some DB)
GO
SELECT name
FROM sys.views;

That's all I've got for now…. more on Inline Functions later!!!

Cheers!

credit: querying microsoft sql server 2012; Itzik Ben-gan, Dejan Sarka, Ron Talmage

Thursday, March 6, 2014

Chapter 8 "Suggested Practice" Answers

Here are some of the answers to the books "suggested practices":

Question 1
ALTER TABLE production.categoriestest
ADD Categorystatus NVARCHAR(15) NOT NULL DEFAULT ''
--

Question 2
SELECT *
FROM production.products
--
INSERT INTO production.products (productid, productname, supplierid, categoryid, unitprice, discontinued)
VALUES (78, N'product HAMMERS', 1, 1, -10, 0)
--
ALTER TABLE production.products
DROP CONSTRAINT [CHK_Products_unitprice]
--
ALTER TABLE production.products WITH CHECK
ADD CONSTRAINT CHK_Products_unitprice CHECK(unitprice > 0)
--
--Note: this first attempt fails; IDENTITY_INSERT is a SET option, not an ALTER TABLE option (correct syntax is below)
--ALTER TABLE production.products
--SET IDENTITY_INSERT ON
--
SET IDENTITY_INSERT [Production].[Products] ON
INSERT INTO production.products (productid, productname, supplierid, categoryid, unitprice, discontinued)
VALUES (78, N'product HAMMERS', 1, 1, -10, 0)
--
ALTER TABLE production.products
DROP CONSTRAINT [CHK_Products_unitprice]

--Altering the column
UPDATE Production.Products
SET unitprice='168'
WHERE unitprice='-10';

So just a quick overview of what's going on here.  The first question asks you to add a NOT NULL column, but it errors out because the existing rows would have no value for it, so you have to change your ALTER TABLE code to give the column a default of an empty string, which is written as '' (two single quotation marks next to each other).

The second question asks you to add a row that contains a negative number for unitprice, but because unitprice has a constraint saying it can't be negative, this also fails.  You have to drop the [CHK_Products_unitprice] constraint and then add the row.  Now you have a negative number in the column; try to add the constraint again and it fails, basically telling you that you can't do that because there is a negative value in the column.  So you have to go back and update the offending value (the last bit of code I wrote), then reapply the constraint.  Fairly simple stuff about how to work with data and make sure that your data has integrity.

Onto Chapter 9 tonight, I have no idea what chapter nine is about, but it will be exciting to learn something new!

Cheers!

Chapter 8, Creating and Altering Tables

I've dug into chapter 8 more heavily lately, mainly reading the chapter over once quickly, letting the ideas melt in my head, then twice to understand in more detail the objective of the chapter... and finally a third time to understand the exercises and try them myself.

Today I'll be completing the "suggested practice" at the end of chapter in which I'll need to know how to create tables, alter tables/columns, create constraints, and transfer columns (there are probably a few more things in there too).

I find it's almost more difficult to study chapters like 6 and 8 because there is less to actually work out in T-SQL.  You need to understand the syntax and know how to write it, but even more so WHY you would need to utilize these things.  I'll post some info on chapter 8 in my next post.  Hopefully I'll be completing the practice today at lunch, then again this evening when I get home.

Cheers!

Friday, February 28, 2014

Xquery

Alright, so I'm ready to dive into some Xquery discussions…

XQuery is just what it sounds like: you can query information out of an XML document.  The thing I had a hard time understanding about XQuery is that I had been studying so much T-SQL, and was comfortable with SELECT, FROM, WHERE, ORDER BY, etc… and I assumed that the result I would get from XQuery would be a normal value that you could then use somewhere else (a CTE, APPLY, SELECT INTO).  It's a little different than that… while you can shred XML data into "normal" looking columns, most of the time the result your query returns is in an XML format (a format that you can specify), which is one of the GOOD things about XML.

So to start with, going back to my example yesterday of the Bagdad Theater in Portland, OR… the XML looked like this:

<usa>
  <or>
    <portland>
      <southeast>
        <hawthorn>
          <thirtyseventh>
            <BagdadTheater>SeatThree</BagdadTheater>
          </thirtyseventh>
        </hawthorn>
      </southeast>
    </portland>
  </or>
</usa>


Well, you have to turn this into an XML variable, which you do with a declaration like this (I modified it and added data, as you'll notice):

DECLARE @abc AS XML;
SET @abc = N'
<usa>
  <or>
    <portland>
      <southeast>
        <hawthorn>
          <thirtyseventh>
            <BagdadTheater>SeatThree</BagdadTheater>
          </thirtyseventh>
        </hawthorn>
      </southeast>
    </portland>
  </or>
</usa>'


Now we can query @abc

SELECT @abc.query('*') AS Complete_Sequence;

That query is going to give us back the complete XML document (@abc).  

You can also query specific areas of the document:

SELECT @abc.query('data(usa/or/portland/southeast/hawthorn/thirtyseventh/BagdadTheater)') AS Element_theaterseat_data;

Can you guess what would be returned?  (The data() function pulls out the element's value, so you get back SeatThree.)

A big piece of Xquery is the use of FLWOR (I call it "flower"), which stands for FOR, LET, WHERE, ORDER BY, RETURN.  FLWOR is similar to SELECT, FROM, WHERE in SQL when querying tables.

In the example above we could use FLWOR to get the same result; it would look like this (note that XQuery keywords are lowercase, and a let clause binds a new variable with :=):

for $x in doc("Theater.XML")/usa/or/portland/southeast/hawthorn/thirtyseventh
let $seat := $x/BagdadTheater
return data($seat)
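SQL Server's XQuery subset doesn't support doc(), so inside T-SQL you would run the same FLWOR through the query() method on the @abc variable declared above; here's a rough sketch:

SELECT @abc.query('
    for $x in /usa/or/portland/southeast/hawthorn/thirtyseventh
    let $seat := $x/BagdadTheater
    return data($seat)
') AS Theater_seat;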

This gives you an idea of how Xquery works… I've begun to think about XML as a big array that we've developed paths and modes of navigation for.  I still think the best way to get comfortable with XML is to work out some problems with it.  The book gives some very simple practice lessons that I am going to try; anyone else who is reading this and is at the same level as me, I encourage you to do the same thing!… I hope this helps!

Thursday, February 27, 2014

My own words Xpath and Navigation

As I wrote in my last post I was going to study further XML so that I'm comfortable explaining a few of the concepts in my own words.

First, Xpath.  Xpath refers to a position in an XML document.  I saw a good example somewhere online as I was working on this that helped me visualize it: suppose you want directions to a geographical location, for example the Bagdad Theater in Portland, Oregon (my home town).  You might write down the location something like this:

usa/or/portland/southeast/hawthorn/thirtyseventh/BagdadTheater

In XML this might look something like this:
<usa>
  <or>
    <portland>
      <southeast>
        <hawthorn>
          <thirtyseventh>
            <BagdadTheater/>
          </thirtyseventh>
        </hawthorn>
      </southeast>
    </portland>
  </or>
</usa>


The Xpath for the Bagdad Theater could be presented as we saw it above: usa/or/portland/southeast/hawthorn/thirtyseventh/BagdadTheater

That's all Xpath is: it describes where something is, and then you can reference whatever is there.  Simple, right?

Next this leads us into Navigation.

[Image: DOM node tree diagram]
[Image: XML node tree diagram]
Credit to w3schools.org for these pics!

They say a picture is worth a thousand words.  This picture helped me to understand what exactly an XML tree looks like, and what the relationships are within it.  It's not surprising that when it comes to navigation most books talk about "child nodes", "siblings", "parents", and "descendants"... they do this because, again, XML is formatted like a tree (think family tree).  So there are children, parents, etc.  To make navigation a little easier we have some abbreviations for the nodes (small sketch after the list):

//    Retrieves the context node and all its descendants
@     Retrieves the specified ATTRIBUTE (remember, attribute values are always in quotes) of the context node
..    Retrieves the parent of the context node; I tend to think about Linux on this one and how you navigate folders, since ".." always goes up one folder
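To see the first two in action from T-SQL, here's a tiny sketch (the XML and names are made up, borrowing the bookstore idea): // reaches elements anywhere below the context node, and @ pulls out an attribute.

DECLARE @x AS XML = N'
<bookstore>
  <book category="WEB">
    <title>Learning XML</title>
  </book>
</bookstore>';

-- // : find title elements anywhere in the document
SELECT @x.query('//title') AS AllTitles;

-- @ : retrieve the category attribute of the book element
SELECT @x.value('(/bookstore/book/@category)[1]', 'varchar(20)') AS Category;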

Hopefully this clears up some navigation within XML.

I'll write more on Xquery tomorrow... cheers until then!

Credits: 
wikipedia.org
w3schools.org
Querying Microsoft SQL Server 2012; Itzik Ben-Gan, Dejan Sarka, Ron Talmage

Wednesday, February 26, 2014

Continuing XML

I intended to read chapter 8 last night, but I flipped to the end of chapter 7 first, where my bookmark was placed.  I then began going through some of the practice within chapter 7 and felt like I still needed to understand XML more clearly.  Not just elements/attributes, but the use of Xquery, Xpath, and navigation of an XML document, including parent/child nodes, etc.  So my goal today is to drill deeper into these ideas and be able to write out exactly what these pieces of XML are, including some examples in my own words.  When I can describe things in my own words I feel like I'm grasping the concept more firmly.

For the record, I did read some of chapter 8 as well, which is about creating objects in the RDBMS such as tables and views.  Chapter 8 is also going to cover adding constraints (and I think checks) to a table.  Again, I find the order in which the book covers these things interesting, as creating tables is a fairly rudimentary piece of TSQL, and it seems like you should know that before you begin to create CTEs.  It will be fun to start combining some of the lessons that I've learned so far and playing around with selecting data into another table or creating a view.  Cool stuff!

So I have my goals today:
-Be able to define xquery, what does it do?
-Define xpath and what it does
-Describe navigation through an XML document


Cheers!

Sunday, February 23, 2014

XML Follow Up

After I posted my homework and answers at SSC, I had a great response from one of the members there.  Here is a link to the forum, and the explanation from Russel Loski.  I believe that I've learned sufficient XML through my studies over the last week, but I understand that there is A LOT that I still don't know.  I know this because simply going to amazon.com and searching for XML turns up many different books to advance one's knowledge of the subject... and I've only spent one chapter on it.  So I understand it in a broad sense, but I would need to study XML specifically for some time in order to grasp its many other components.

As a side note, this is what I love about learning these technologies...  I could spend one chapter on a subject such as XML and get a rudimentary education on it.  However, I could also spend five years learning deeper and deeper how it works... there is nothing stopping me from doing so.  How cool!

With this all being said, I'm going to read through chapter 7 one more time, then push on to chapter 8.

Cheers all!

Wednesday, February 19, 2014

Homework and answers

So, as I suggested for homework on querying an XML document, I went with the book's suggestion, which was to use the Resume column in the HumanResources.JobCandidate table (AdventureWorks).  The suggested practice was to find everyone's first and last name... then the secondary part of the problem was to find all candidates who live in Chicago (Location.City).  Here is the way that I found to do it:

WITH XMLNAMESPACES('http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/Resume' AS ns)
SELECT [Resume].value('(/ns:Resume/ns:Name/ns:Name.First)[1]', 'varchar(50)') AS Firstname,
[Resume].value('(/ns:Resume/ns:Name/ns:Name.Last)[1]', 'varchar(50)') AS Lastname
FROM HumanResources.JobCandidate
WHERE [Resume].exist('/ns:Resume/ns:Employment/ns:Emp.Location/ns:Location/ns:Loc.City[.="Chicago"]')=1


I'm going to post this to SSC to see if there is an easier way of doing this (meaning an easier way of writing it, not optimization-wise), and to ask what the search methods are when values are stored as attributes instead of elements (the resumes are element-based).
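Until I hear back, here's my rough guess at the attribute case (made-up XML, not the real resume schema): with attributes you put @name in the path.

DECLARE @x AS XML = N'<people><person firstname="Sue" city="Chicago"/></people>';

-- value() with an attribute: note the @ in the path
SELECT @x.value('(/people/person/@firstname)[1]', 'varchar(50)') AS Firstname;

-- exist() with an attribute predicate
SELECT @x.exist('/people/person[@city="Chicago"]') AS LivesInChicago;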

Cheers!

Answers soon

Working through chapter 7 while mixing in homework from the other chapters. I'm literally dreaming about SQL! 

I'll post some answers to the "simple" XML queries that I suggested doing the other day, such as searching for the first and last name of someone.

Cheers!

Sunday, February 16, 2014

Sandbox Homework

As opposed to writing out business query objects and working through them in SQL, tonight I spent time creating objects on my virtual machine, as well as my native machine.  I worked on creating a database that had a few different tables in it, and had a compound primary key on one of the tables.  When I started looking into these things (especially the second piece) I came across a blog that I thought was interesting because it discussed the use of SSNs as primary keys, and whether or not that was a good idea.  The overall answer was no, it is not a good idea to use them as PKs.

For the database that I'm creating (contacts in my phone book) I want to use the phone number as the primary key, but not all of these are unique because a number can be assigned to multiple "people"... therefore I began looking at the idea of a composite primary key.  I would likely have to use the phone number and first name together as the composite key... anyways, I'm working on it (rough sketch below).  It's fun to play around with this stuff in SQL, as I have with Synergex in the past.  The phone numbers are currently stored as VARCHAR(12); it'll be fun to remove the "-" from the phone numbers (if they exist) and cast them into an integer type.
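Here's a rough sketch of what I have in mind (the column names and sizes are just my working guesses):

CREATE TABLE dbo.Contacts
(
    phonenumber VARCHAR(12) NOT NULL,
    firstname   VARCHAR(50) NOT NULL,
    lastname    VARCHAR(50) NULL,
    CONSTRAINT PK_Contacts PRIMARY KEY (phonenumber, firstname)  -- composite primary key
);

-- Strip the dashes and cast; BIGINT because a ten-digit number overflows INT
SELECT CAST(REPLACE(phonenumber, '-', '') AS BIGINT) AS phone_as_number
FROM dbo.Contacts;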

Once I have my DB set up, I'll want to be able to connect to it from my current machine instead of the virtual one I'm currently working on.  If anyone is reading this, I encourage you to do the same thing, as it proves valuable in navigating SQL Server itself, not just TSQL.

Saturday, February 15, 2014

XML Concepts

As I posted yesterday, I am going to try to articulate the easiest way to understand XML from what I know so far.  First off, as I wrote previously, XML doesn't do anything; it's just a different way to format data that makes it easier to transport.

XML documents look something like this (taken from here):
<bookstore>
  <book category="CHILDREN">
    <title>Harry Potter</title>
    <author>J K. Rowling</author>
    <year>2005</year>
    <price>29.99</price>
  </book>
  <book category="WEB">
    <title>Learning XML</title>
    <author>Erik T. Ray</author>
    <year>2003</year>
    <price>39.95</price>
  </book>
</bookstore>

The first few lines of the code above, written attribute-centric:

<bookstore>
  <book category="CHILDREN">
  </book>
</bookstore>

carry the same information as the element-centric version:

<bookstore>
  <book>
    <category>CHILDREN</category>
  </book>
</bookstore>

Can you see the difference?  The first example uses an attribute, while the second uses only elements… the easiest way to recognize an attribute is that its value always has quotation marks.  Elements are a little trickier because elements can CONTAIN "other elements, text, attributes, or a mix of both", according to w3schools.com.

So is it better to write element-centric XML or attribute-centric?  After a Google search, here is a decent explanation taken from Stack Overflow:

Attribute centric
  • Smaller size than element centric.
  • Not very interoperable, since most XML parsers assume the user data is presented by elements; attributes are used to describe the element.
  • There is no way to present a nullable value for some data types, e.g. a nullable int.
  • Cannot express complex types.
Element centric
  • Complex types can only be presented as element nodes.
  • Very interoperable.
  • Bigger size than attribute centric (compression can be used to reduce the size significantly).
  • Nullable data can be expressed with the attribute xsi:nil="true".
  • Faster to parse, since the parser only looks at elements for user data.
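You can actually see both shapes from the same data in SQL Server (assuming the book's sample Production.Products table): FOR XML RAW is attribute-centric by default and switches to element-centric when you add ELEMENTS.

-- Attribute-centric: <Product productid="..." productname="..."/>
SELECT TOP (2) productid, productname
FROM Production.Products
FOR XML RAW('Product');

-- Element-centric: <Product><productid>...</productid><productname>...</productname></Product>
SELECT TOP (2) productid, productname
FROM Production.Products
FOR XML RAW('Product'), ELEMENTS;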
The next thing to be able to identify is the namespace within XML.
I think about a namespace kind of like how you would use table aliases (the AS keyword) in TSQL… say during an inner join:

SELECT h.empid, o.firstname
FROM hr.employees AS h
INNER JOIN hr.oldemployees AS o
    ON h.empid = o.empid;

In this analogy, the namespace prefixes play the role of the h and o in h.xxxxx or o.xxxxx: they qualify which "family" a name belongs to.

That's a brief overview of namespaces; I learned a lot from this link, which explained namespaces very well.  It discusses when they are created and how they are used in a practical sense.
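To make the analogy concrete, here's a small made-up example of declaring a namespace in an XML variable and then referencing it through a prefix in WITH XMLNAMESPACES when querying it:

DECLARE @ord AS XML = N'
<ord:Order xmlns:ord="urn:example:orders">
  <ord:OrderID>10</ord:OrderID>
</ord:Order>';

-- The ord prefix works like a table alias: it says which namespace each element belongs to
WITH XMLNAMESPACES('urn:example:orders' AS ord)
SELECT @ord.value('(/ord:Order/ord:OrderID)[1]', 'int') AS OrderID;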

Well, that gives us a few ideas about XML (elements, attributes, and namespaces), but there is a lot more to go.  It's important to know these things because, as far as the 461 test goes, I know that XML is something that is covered.

As homework for chapter 7, the book has a few suggested practices which I think are straightforward and easy to work with.  They suggest: use a simple XML column in a sample database, and search it for elementary things like a person's first name, a person's last name, a person's age, etc.  Here is a bit of code found on Stack Overflow that can help you find the columns in the current database that contain XML data:

SELECT table_name AS [Table Name], column_name AS [Column Name]
FROM INFORMATION_SCHEMA.COLUMNS
WHERE data_type = 'xml';

Cheers until next time!