The Definition of Tedious is Error Log Analysis

It’s Monday, 8 AM

Real-world example: it's 8 AM on a Monday morning. Nothing was entered into Change Management that should impact your SQL world in any way. Yet the smartphone is going BLIP, BLIP, BLIP with new email messages and the office phone is ringing so much it is about to shake itself off the desk. Where is the first place to start looking for problems? The error log.

The Log Book

If you are like me, wading through scores of error logs is not your cup of tea. However, I do realize the benefit of the error log and the wealth of information it contains. There just has to be a better way of muddling through the log.

Wouldn't it be great if there were some way the error log could be viewed as a table? As a table, the DBA could do what good DBAs do and write scripts to pull out the information that is worthwhile.

Here is a list of great sites that explain how to do just that…

Scripts

I borrowed from the above and others and have a few scripts that can come in handy. One thing to note: using "master.sys.xp_readerrorlog" instead of "sys.xp_readerrorlog" gives you the ability to filter on dates and the ability to sort. By default, this extended procedure returns the oldest rows first.

Error Log Location

    -- Location of error logs
    EXEC master.sys.xp_readerrorlog 0, 1, N'Logging SQL Server messages in file', NULL, NULL, N'asc'

Error Log Archive Information

    -- Get a list of all error logs
    -- 0 is the current one
    EXEC master.sys.xp_enumerrorlogs

Error Log Script

    /*
    @pLogFile             The error log file you want to read: 0 = current, 1 = Archive #1, 2 = Archive #2
                          -- by default there are up to 6 error logs, but there could be as many as 99 depending on setup
    @pLogType             Log file type: 1 = error log, 2 = SQL Agent log
    @pLogSearchString1    The first string to filter by
    @pLogSearchString2    The second string to filter by
    @pLogSearchBegin      Begin date to filter results
    @pLogSearchEnd        End date to filter results
    @pLogSortOrder        Sort order, ascending or descending

    All of the above parameters are NULL-able except @pLogType and @pLogSortOrder.
    */
    DECLARE @pLogFile           INT = 0
    DECLARE @pLogType           INT = 1
    DECLARE @pLogSearchString1  VARCHAR(255) = NULL
    DECLARE @pLogSearchString2  VARCHAR(255) = NULL
    DECLARE @pLogSearchBegin    DATETIME = NULL --'07/23/2012 00:00:00.000'
    DECLARE @pLogSearchEnd      DATETIME = NULL --'07/23/2012 23:59:59.999'
    DECLARE @pLogSortOrder      VARCHAR(4) = N'desc'

    EXEC master.sys.xp_readerrorlog @pLogFile, @pLogType, @pLogSearchString1, @pLogSearchString2, @pLogSearchBegin, @pLogSearchEnd, @pLogSortOrder

Of course, any of this data can be saved (persisted) to a table on the local server or on a centralized server. An example saving this to a temp table would look similar to the following.

Save Error Log to ##_TmpError

    -- Same parameters as the above example
    DECLARE @pLogFile           INT = NULL --0
    DECLARE @pLogType           INT = 1
    DECLARE @pLogSearchString1  VARCHAR(255) = NULL
    DECLARE @pLogSearchString2  VARCHAR(255) = NULL
    DECLARE @pLogSearchBegin    DATETIME = NULL --'07/23/2012 00:00:00.000'
    DECLARE @pLogSearchEnd      DATETIME = NULL --'07/23/2012 23:59:59.999'
    DECLARE @pLogSortOrder      VARCHAR(4) = N'desc'

    -- Drop the temp table if it still exists
    IF EXISTS (SELECT name FROM tempdb..sysobjects WHERE name = '##_TmpError')
        DROP TABLE ##_TmpError

    -- Create a temp table structure to hold the data
    CREATE TABLE ##_TmpError
        (
          ErrorLogDate DATETIME NULL
        , ProcessInfo  NVARCHAR(25) NULL
        , ErrorText    NVARCHAR(1000) NULL
        )

    -- Insert the records into the temp table
    INSERT INTO ##_TmpError EXEC master.sys.xp_readerrorlog @pLogFile, @pLogType, @pLogSearchString1, @pLogSearchString2, @pLogSearchBegin, @pLogSearchEnd, @pLogSortOrder

    -- Retrieve the records for inspection
    SELECT ErrorLogDate, ProcessInfo, ErrorText FROM ##_TmpError

    -- Drop the temp table
    DROP TABLE ##_TmpError

Although my example shows the process dumping the records to a temp table, I can't think of an actual real-world example where this would have benefit. I don't persist my error log, but if I did, I would be highly selective as to what was saved and how long I saved it.
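
For illustration only, here is a minimal sketch of what selective persistence might look like. The table name, the filter string, and the 90-day retention window are all assumptions, not recommendations; deduplication between runs is also left out.

    -- Hypothetical example: keep only rows containing 'Error:' from the current log in a permanent
    -- table, then purge anything older than 90 days. Table name and retention period are assumptions.
    IF OBJECT_ID('dbo.ErrorLogHistory') IS NULL
        CREATE TABLE dbo.ErrorLogHistory
            (
              ErrorLogDate DATETIME NULL
            , ProcessInfo  NVARCHAR(25) NULL
            , ErrorText    NVARCHAR(1000) NULL
            )

    INSERT INTO dbo.ErrorLogHistory (ErrorLogDate, ProcessInfo, ErrorText)
    EXEC master.sys.xp_readerrorlog 0, 1, N'Error:', NULL, NULL, NULL, N'asc'

    DELETE FROM dbo.ErrorLogHistory WHERE ErrorLogDate < DATEADD(dd, -90, GETDATE())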


How to find a needle in a haystack

Where is my lost object?

An interesting string of events happened today. A developer noticed that an automatic email wasn't being generated from a database with hundreds of triggers, tables, stored procedures, and jobs. He knew that a DBA (now retired) had created an object of some kind that was automatically generating emails. What he didn't know was where, or what kind of object. The saving grace is that the DBA who created the object had enough forethought to document the name of the object in the body of the email. So, now knowing the name of the object, how can the new DBA find it in the database?

Script

You will need to know, at a minimum, the database and object name. The script assumes the object is a child of some parent object (at least two levels deep); otherwise, revise the join clause.

    USE [Database Name];
    GO

    DECLARE @ObjectId INT
    DECLARE @ObjectName NVARCHAR(128)
    SELECT  @ObjectName = '[Object Name]'
    SELECT  @ObjectId = OBJECT_ID(@ObjectName)

    SELECT [SO1].[name] AS [Child Object], [SO1].[id] AS [Child Object ID],
    CASE
        WHEN [SO1].[xtype] = 'AF' THEN 'Aggregate function (CLR)'
        WHEN [SO1].[xtype] = 'C'  THEN 'CHECK constraint'
        WHEN [SO1].[xtype] = 'D'  THEN 'Default or DEFAULT constraint'
        WHEN [SO1].[xtype] = 'F'  THEN 'FOREIGN KEY constraint'
        WHEN [SO1].[xtype] = 'L'  THEN 'Log'
        WHEN [SO1].[xtype] = 'FN' THEN 'Scalar function'
        WHEN [SO1].[xtype] = 'FS' THEN 'Assembly (CLR) scalar-function'
        WHEN [SO1].[xtype] = 'FT' THEN 'Assembly (CLR) table-valued function'
        WHEN [SO1].[xtype] = 'IF' THEN 'In-lined table-function'
        WHEN [SO1].[xtype] = 'IT' THEN 'Internal table'
        WHEN [SO1].[xtype] = 'P'  THEN 'Stored procedure'
        WHEN [SO1].[xtype] = 'PC' THEN 'Assembly (CLR) stored-procedure'
        WHEN [SO1].[xtype] = 'PK' THEN 'PRIMARY KEY constraint (type is K)'
        WHEN [SO1].[xtype] = 'RF' THEN 'Replication filter stored procedure'
        WHEN [SO1].[xtype] = 'S'  THEN 'System table'
        WHEN [SO1].[xtype] = 'SN' THEN 'Synonym'
        WHEN [SO1].[xtype] = 'SQ' THEN 'Service queue'
        WHEN [SO1].[xtype] = 'TA' THEN 'Assembly (CLR) DML trigger'
        WHEN [SO1].[xtype] = 'TF' THEN 'Table function'
        WHEN [SO1].[xtype] = 'TR' THEN 'SQL DML trigger'
        WHEN [SO1].[xtype] = 'TT' THEN 'Table type'
        WHEN [SO1].[xtype] = 'U'  THEN 'User table'
        WHEN [SO1].[xtype] = 'UQ' THEN 'UNIQUE constraint (type is K)'
        WHEN [SO1].[xtype] = 'V'  THEN 'View'
        WHEN [SO1].[xtype] = 'X'  THEN 'Extended stored procedure'
    END AS [Child Type]
    ,OBJECT_NAME([SO1].[parent_obj]) AS [Parent Object]
    ,CASE
        WHEN [SO2].[xtype] = 'AF' THEN 'Aggregate function (CLR)'
        WHEN [SO2].[xtype] = 'C'  THEN 'CHECK constraint'
        WHEN [SO2].[xtype] = 'D'  THEN 'Default or DEFAULT constraint'
        WHEN [SO2].[xtype] = 'F'  THEN 'FOREIGN KEY constraint'
        WHEN [SO2].[xtype] = 'L'  THEN 'Log'
        WHEN [SO2].[xtype] = 'FN' THEN 'Scalar function'
        WHEN [SO2].[xtype] = 'FS' THEN 'Assembly (CLR) scalar-function'
        WHEN [SO2].[xtype] = 'FT' THEN 'Assembly (CLR) table-valued function'
        WHEN [SO2].[xtype] = 'IF' THEN 'In-lined table-function'
        WHEN [SO2].[xtype] = 'IT' THEN 'Internal table'
        WHEN [SO2].[xtype] = 'P'  THEN 'Stored procedure'
        WHEN [SO2].[xtype] = 'PC' THEN 'Assembly (CLR) stored-procedure'
        WHEN [SO2].[xtype] = 'PK' THEN 'PRIMARY KEY constraint (type is K)'
        WHEN [SO2].[xtype] = 'RF' THEN 'Replication filter stored procedure'
        WHEN [SO2].[xtype] = 'S'  THEN 'System table'
        WHEN [SO2].[xtype] = 'SN' THEN 'Synonym'
        WHEN [SO2].[xtype] = 'SQ' THEN 'Service queue'
        WHEN [SO2].[xtype] = 'TA' THEN 'Assembly (CLR) DML trigger'
        WHEN [SO2].[xtype] = 'TF' THEN 'Table function'
        WHEN [SO2].[xtype] = 'TR' THEN 'SQL DML trigger'
        WHEN [SO2].[xtype] = 'TT' THEN 'Table type'
        WHEN [SO2].[xtype] = 'U'  THEN 'User table'
        WHEN [SO2].[xtype] = 'UQ' THEN 'UNIQUE constraint (type is K)'
        WHEN [SO2].[xtype] = 'V'  THEN 'View'
        WHEN [SO2].[xtype] = 'X'  THEN 'Extended stored procedure'
    END AS [Parent Type]
    ,[SO1].[parent_obj] AS [Parent Object ID]
    FROM sys.sysobjects AS [SO1]
    INNER JOIN sys.sysobjects AS [SO2]
        ON [SO1].[parent_obj] = [SO2].[id]
    WHERE [SO1].[id] = @ObjectId
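
If only a fragment of the name is known (from the body of the email, say), a quick pattern search can narrow things down before running the script above. A minimal sketch, assuming SQL Server 2005 or later; the search string is a placeholder:

    -- Hypothetical example: find any object whose name contains a known fragment.
    -- Replace 'EmailNotify' with the fragment you actually know.
    SELECT [name], [type_desc], OBJECT_NAME([parent_object_id]) AS [Parent Object]
    FROM sys.objects
    WHERE [name] LIKE '%EmailNotify%'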


Monitoring Backups

Now that you have a good grasp on what needs to be backed up, and you have a mechanism in place to capture the backups your customer needs, one more step needs to take place before crossing "Backups" off the checklist. Monitoring the backup process is nearly as important as doing it. It reminds me of the philosophical question, "If a tree falls in the forest, does it make a sound?" Our twist on that question is: if a DBA doesn't monitor the backup process, did the DBA get a good backup?

So how can a DBA monitor the backup process without spending all day weeding through job history and log files? The solution is to automate the process. There are a number of commercial products out there that will do things similar to what I am about to describe. The process I will show you is a culmination of other DBAs' work along with my own. In other words, I am not taking full credit for any of the scripts, and they are provided as is, without any warranty or guarantee. However, these are the actual scripts I use in my environment, and they have served me well as a glimpse into my environment so that I can get on to more important projects instead of scouring through log and history files.

This script should be run every day. However, if you choose not to check every day, update the WHERE clause so that the query pulls back records according to your timetable. NOTE: this should be compatible with SQL Server 2000 forward.

    SELECT
         CONVERT(NVARCHAR(100), SERVERPROPERTY('Servername')) AS [ServerName]
        ,[database_name]        --nvarchar(128)  Name of the database involved in the backup operation. Can be NULL.
        ,[compatibility_level]  --tinyint        Compatibility level setting for the database. Can be:
                                --                   80  = SQL Server 2000
                                --                   90  = SQL Server 2005
                                --                   100 = SQL Server 2008
                                --               Can be NULL. For more information about compatibility levels, see sp_dbcmptlevel (Transact-SQL).
                                --               http://msdn.microsoft.com/en-us/library/ms178653.aspx
        ,[backup_size]          --numeric(20,0)  Size of the backup set, in bytes. Can be NULL.
        ,[type]                 --char(1)        Backup type. Can be:
                                --                   D = Database
                                --                   I = Differential database
                                --                   L = Log
                                --                   F = File or filegroup
                                --                   G = Differential file
                                --                   P = Partial
                                --                   Q = Differential partial
                                --               Can be NULL.
        ,[backup_start_date]    --datetime       Date and time the backup operation started. Can be NULL.
        ,[backup_finish_date]   --datetime       Date and time the backup operation finished. Can be NULL.
        ,[expiration_date]      --datetime       Date and time the backup set expires. Can be NULL.
        ,[name]                 --nvarchar(128)  Name of the backup set. Can be NULL.
        ,[user_name]            --nvarchar(128)  Name of the user performing the backup operation. Can be NULL.
        ,GETDATE() AS [ReportedDate]
    FROM
        msdb.dbo.backupset
    WHERE
        1=1
        AND [backup_start_date] BETWEEN DATEADD(hh, -24, GETDATE()) AND GETDATE()
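
The query above lists the backups that did happen. The flip side is just as useful: databases with no recent backup at all. Here is a hedged companion sketch; the 24-hour window and the exclusion of tempdb are assumptions to adjust to your own schedule, and unlike the query above it relies on sys.databases, so it assumes SQL Server 2005 or later.

    -- Hypothetical companion check: databases with no full or differential backup in the last 24 hours.
    SELECT
         [d].[name] AS [database_name]
        ,MAX([b].[backup_finish_date]) AS [last_backup_finish_date]
    FROM
        master.sys.databases AS [d]
        LEFT JOIN msdb.dbo.backupset AS [b]
            ON [b].[database_name] = [d].[name]
            AND [b].[type] IN ('D', 'I')
    WHERE
        [d].[name] <> 'tempdb'
    GROUP BY
        [d].[name]
    HAVING
        MAX([b].[backup_finish_date]) IS NULL
        OR MAX([b].[backup_finish_date]) < DATEADD(hh, -24, GETDATE())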

Keeping History

Now that you have a query to run, who wants to remember to run it on every server in the DBA environment? In an upcoming post I will describe a methodology for storing and gathering these metrics on a daily basis.


Top Priority: Backups

As discussed in the last post, backups should be your top priority. With that said, the next thing to discuss is what kind of backups to take and how often to take them. Microsoft has an excellent page explaining SQL Server backups. There are three recovery models, but I will only cover two of them here: Simple and Full. Next to consider are backup types, and for our scope we will only address Full, Differential, and Log backups.

So, how does the DBA determine the appropriate backup strategy for the SQL environment? Understanding uptime requirements and the costs associated with a recovery strategy will help foster discussions so that those who hold the purse strings make the right decision for the right cost.

For the DBA, a simple recovery model with nightly full backups will get you started with your backup strategy, but it may not meet the requirements of a customer needing point-in-time recovery. A more sophisticated strategy pairs the full recovery model with two or more backup types. My personal preference is to do full backups twice a week, generally on Wednesdays and Sundays, with differentials on the remaining days of the week. Depending on the requirements, I will augment the strategy with more frequent fulls and differentials, as well as adding log backups as frequently as every hour or as sparsely as every eight hours. Considerations for any strategy will be recovery time and backup storage space. Obviously, restoring one full backup file will take considerably less time than also having to roll forward with differentials and log files. The customer will need to help guide the decision about which has more value to them: a quicker recovery time with a larger footprint in backup storage, or a longer recovery time with a smaller footprint. One last discussion topic is the retention schedule of the backups. The retention schedule can help offset some costs by cutting down on how long and how many of the backups need to be kept.
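
As a rough illustration of that Wednesday/Sunday pattern, the three backup types boil down to three BACKUP statements. This is a sketch only; the database name and file paths are placeholders, and in practice each backup would get a unique, timestamped file name driven by an Agent job schedule.

    -- Hypothetical schedule for a database named SalesDB, full recovery model assumed.

    -- Wednesday and Sunday nights: full backup
    BACKUP DATABASE [SalesDB] TO DISK = N'X:\Backups\SalesDB_Full.bak'

    -- Remaining nights: differential backup
    BACKUP DATABASE [SalesDB] TO DISK = N'X:\Backups\SalesDB_Diff.bak' WITH DIFFERENTIAL

    -- Every hour (or as sparse as every eight hours): log backup
    BACKUP LOG [SalesDB] TO DISK = N'X:\Backups\SalesDB_Log.trn'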


DBA Top 10 To Do List

To start the ball rolling, get a routine. If there isn't a routine, it will be hard to determine when the routine is off. Having a routine makes it easier to train others coming up through the ranks, or just to have someone fill in for vacation days. Personally, I find a routine helps me focus my early-morning energy on the mundane and opens the afternoon to work out issues found in the morning or, better yet, the projects that take a back burner when dealing with emergency issues. Speaking of emergency issues, without a routine, everything feels like an emergency.

There are a number of sites that have a top 10, and I don't claim to have a special spin on the tasks at hand, other than to underline the importance of making the time to do them.

  1. Backups!
    There is no argument that backups are important; the general question is what kind and how often. Surprisingly, not all (so-called) DBAs treasure a SQL Server database. As a DBA, don't let anyone talk you out of doing your own database backups. A server backup is not the same and does not give you options or flexibility when recovering. I have discovered that a restore is required more often because of end-user error than because of an actual disaster event such as hardware failure.

    My preference for a database recovery plan is to use the FULL recovery model with nightly full backups, differentials twice during the day at 10 AM and 3 PM (we operate on a normal 8-5 business day), and log backups every hour. Not all servers and databases are created equal; as time and space requirements impinge on my backup schedule, this gets adjusted as needed. I should also note that I like consistency, so if I need to change the plan for one database on a server, the backup processes change for the whole server. This just makes the backup process as a whole easier to administer, maintain, and remember. One last note on server backups: they are needed. I just prefer that the server backup include everything except my .ldf, .mdf, and .ndf files. Most server backup software allows for exceptions.

  2. Restore Backups Periodically.
    It is not enough to perform a backup job and never test to make sure that the backup is recoverable. This can be a time-consuming process, but one that will help you sleep better. More than likely your backup process is automated and, if you are like me, you have another automated process to check that it completed successfully. One thing is for certain: technology is designed to fail. With that in mind, take a day out of the week to test one backup from one server (a quick sanity check is sketched after this list). If nothing else, it will keep you familiar with the process, and during a real DR event you will be glad that perfect practice makes perfect.
  3. Monitor Logs.
    Microsoft has done a lot to make alerts and notifications available in more than one format. For the minimalist, there are the raw logs and the event viewer. For the technically savvy, alerts and messages can be forwarded or even emailed. I recently read an article about creating a special job to push the log information into a SQL Server table so that it could be queried and reports produced. In other words, there is no reason the logs should remain an untapped resource for proactively responding to server issues.
  4. Manage SQL Server Jobs
    Now that we are familiar with the logs and alerts, it is natural to discuss the need to monitor our automated jobs. Jobs that run successfully for weeks, months, or even years will inevitably fail once in a while. I see all too often that a junior DBA will create a job and, with great confidence, never test or check to see how successfully it runs. Personally, I like to track my job outcomes in a monitoring database. This gives me easy access to the job history for all of the servers I maintain, and I can easily tell with one query which servers need immediate attention (a sketch of such a query follows this list).
  5. Database Integrity.
    A lot can be found on the need for running integrity checks on the database periodically. This has really proven beneficial for older and larger databases. Best practice states that the DBCC commands should run before full backups. I don't disagree with the statement, but I don't always follow it. DBCC can be time consuming and resource intensive. Even though we operate 8-5 Monday through Friday, there are several off-hours processing jobs that run on several of our servers and limit the time frame for nightly backups. Our method is to run the integrity checks over the weekend. The job produces a report that should be evaluated after the run. Some issues will cause the job to fail; if it fails, the report will guide you to the underlying issue or issues.
  6. Maintain Indexes.
    As the database grows through inserts or shrinks through deletes, the indexes will get fragmented and need maintenance. The best way to automate this process is Smart Maintenance, which also includes scripts for backups and integrity checks. (Thanks, Derrick B., for turning me on to Smart Maintenance.)
  7. Create Documentation.
    The best ally in a time of meltdown is good documentation. It is sad that not everyone sees the need for documenting what they are doing, either to help them remember or for the poor sap who comes after them. Out of all the items mentioned here, this one is the easiest to start and requires the least amount of skill. Start small, but plan big. In other words, start by documenting all of the SQL Servers you maintain; include things like service logins and the version of SQL Server down to the point release. Include maintenance jobs and their schedules. You may consider tracking purchase and license information, or at least a pointer to it, such as a purchase order number. What I mean by planning big is to consider this the start of your disaster recovery plan document. Understanding what your current environment looks like will go a long way if you ever need to rebuild from scratch.
  8. Performance Monitor.
    If you are a DBA, then I know you have heard "The database is slow" at least once. We know (with a wink) that it is usually the application code or a poorly written query that is to blame. So how can we defend ourselves? Monitor the server's performance. There are several tools available within the OS and SQL Server to at least get us the ammunition needed to defend the honor of SQL Server. The common tools are Performance Monitor (perfmon), SQL Profiler, execution plans, index tuning, and the Database Tuning Advisor. There are tons of articles explaining how to use these. Generally the best place to start is with Performance Monitor; it will at least give an overall view of the server. I specifically look at CPU, buffer pools, disk, and network. If there is a problem, you will see it in one of these areas first.
  9. Security
    This item gets overlooked a lot. The story goes along the lines that a manager requests database access for an employee, so a "new user" is created at the database level. If you're lucky, it was a network credential and not a SQL login. Now the employee is either no longer employed or their job duties have changed and no longer require access, but no one has told the DBA team to revoke access.

    If your company uses AD, I highly recommend moving toward administering SQL access with groups as opposed to individual logins. This will be the first step toward turning off mixed-mode authentication. Clean up the users by moving them into groups and allowing them access via the groups. The hard sell will probably be your developers who use an "application" or "sql" login. Start now by talking and learning about Kerberos security. Meet your AD administrator and start talking Kerberos. Before you know it, your AD manager will be on your side, asking the development manager why they are creating security holes in the network.

  10. Disaster Recovery Planning
    Out of the ten items, this will be the most comprehensive and possibly the most time consuming. But remember, the journey of a thousand miles starts with the first step. Start by documenting how things look right now; this will become your baseline. Next, begin documenting the day-to-day issues that crop up, like a failed backup job. You are right, a failed job isn't exactly a DR event, but this will get your team thinking about the process and documenting their steps. These notes can then be compiled and published so that you have a resource for your junior DBAs. This will naturally grow and should encompass issues that can occur at the server, instance, database, and table level. Finally, a plan of priorities needs to be established. This list should be reviewed by all areas of IT and the business. You don't want one area to place a high priority on an application server that uses a database staged on a low-priority database server.
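
A quick sanity check for item 2 might look something like the following. The file path is a placeholder, and RESTORE VERIFYONLY only confirms the backup media is readable and complete, not that a full restore will succeed, so the periodic restore to a test server remains the real test.

    -- Hypothetical check of a backup file; replace the path with one of your own.
    RESTORE HEADERONLY FROM DISK = N'X:\Backups\SalesDB_Full.bak'  -- what backup sets are in the file
    RESTORE VERIFYONLY FROM DISK = N'X:\Backups\SalesDB_Full.bak'  -- is the backup readable and complete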

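And for item 4, here is a rough sketch of the kind of one-query check I mean. It pulls SQL Server Agent job steps that have failed since yesterday from msdb; the one-day window is an assumption.

    -- Hypothetical check: SQL Server Agent job steps that failed since yesterday.
    SELECT
         [j].[name]      AS [JobName]
        ,[h].[step_id]   AS [StepId]
        ,[h].[step_name] AS [StepName]
        ,[h].[run_date]  AS [RunDate]  -- integer, YYYYMMDD
        ,[h].[run_time]  AS [RunTime]  -- integer, HHMMSS
        ,[h].[message]   AS [Message]
    FROM msdb.dbo.sysjobhistory AS [h]
    INNER JOIN msdb.dbo.sysjobs AS [j]
        ON [h].[job_id] = [j].[job_id]
    WHERE [h].[run_status] = 0  -- 0 = failed
      AND [h].[run_date] >= CONVERT(INT, CONVERT(CHAR(8), DATEADD(dd, -1, GETDATE()), 112))
    ORDER BY [h].[run_date] DESC, [h].[run_time] DESC
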
There is a lot here, and even more online. With this I will leave you with a story. A farmer was looking for a hired hand, and a rather rough and aged-looking man approached him. The man asked about the job, and the farmer asked for his qualifications, to which the man replied, "I sleep soundly during a storm." At this the farmer smiled timidly, not knowing what he meant, but desperate for help, the farmer hired the man. About a month into the job a terrible storm awoke the farmer about 3 a.m. The farmer, in a panic, ran out and found the hired man fast asleep. The farmer tried to wake the man, shouting, "Wake up! Wake up! The fence needs mending, the cows might escape!" The hired hand, in a gruff, half-asleep voice, said, "No worries. The cows are in the barn and the fence is fixed." The farmer, still panicking, said, "What about the chickens and horses? We need to make sure they are in the barn too." Again the hired man said, "Not to worry, they are all tucked away in the barn. All is good." The farmer retorted, "Well, what about the barn roof? It leaks! We can't sleep knowing that the barn roof leaks!" But the irritated hired man rolled over and calmly stated, "Sir, when you hired me I told you my qualification was that I sleep soundly through a storm. I took care of the roof, the chickens, the horses, and the cows all before the storm came, so that I could sleep through it."

The moral of the story is that now, when the sun is shining, is the time to prepare for when the storm comes. Be ready with good documentation, good backups, and a great all-around maintenance plan, and you too will be able to sleep soundly during the storm.


How to copy a table from one database to another.

Scenario

A few days after being knighted as the agency's DBA, I was approached by a customer requesting that a table be migrated from one database to another on an altogether different server. Feeling overly confident, I agreed. On the surface it seems like a fairly straightforward process, and this is how I handled the situation. It was a fairly small table consisting of roughly 100 rows; had the table been larger, I would have looked into bulk copy or SSIS.

The Process

  1. Make a clean backup of the source and target databases.
  2. Restore the source database on the target server.
  3. Using the right-click menu for the source table, generate a script to create the table.
  4. Edit the CREATE TABLE script if anything needs to be changed.
  5. Run the CREATE TABLE script against the target database.
  6. Using the right-click menu for the target table, generate an INSERT script.
  7. Using the right-click menu for the source table, generate a SELECT script.
  8. Combine the two queries so that they look like Figure 1.
  9. Execute the INSERT-SELECT.
  10. Drop the source database.

Figure 1

(Screenshot of the combined INSERT-SELECT statement.)
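
Since the screenshot does not reproduce well here, this is a minimal sketch of what the combined statement might look like. The database, table, and column names are placeholders; the actual column lists come from the generated INSERT and SELECT scripts in steps 6 and 7.

    -- Hypothetical combined query built from the generated INSERT and SELECT scripts.
    INSERT INTO [TargetDB].[dbo].[Customer] ([CustomerId], [CustomerName], [CreatedDate])
    SELECT [CustomerId], [CustomerName], [CreatedDate]
    FROM [SourceDB_Restored].[dbo].[Customer]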

Things to consider

  • The size of the source database.
  • The sensitivity of the data. Will restoring the full database on the target server break policy or breach security?
  • The size of the table and the number of rows for the insert.
  • The possible use of a linked server (see the sketch after this list).
  • The possible use of SSIS.
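
For what it is worth, a linked server would avoid restoring the source database at all. A hedged sketch, assuming a linked server named [SourceServer] has already been created, the statement is run in the target database, and the same placeholder table as above is used:

    -- Hypothetical cross-server copy via a linked server; note the four-part name on the source.
    INSERT INTO [dbo].[Customer] ([CustomerId], [CustomerName], [CreatedDate])
    SELECT [CustomerId], [CustomerName], [CreatedDate]
    FROM [SourceServer].[SourceDB].[dbo].[Customer]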
