Transact-SQL
is the language used to create objects and access data in SQL Server.
Data Manipulation Language (DML) is the part of the Transact-SQL language that allows you to insert, modify, and delete data in SQL Server tables. The core statements that constitute DML are INSERT, UPDATE, DELETE, and MERGE. In this chapter you will use these statements only in a basic way, as detailed DML coverage is outside the scope of this book and exam.
Data
Definition Language (DDL) is a subset of Transact-SQL that deals with
creating database objects such as tables, constraints, and stored
procedures. You will examine these statements in depth as they are
mapped directly to the exam objectives. SQL Server 2008 Management
Studio provides a rich user interface for creating these database
objects. However, not all functionality is available within the user
interface, and often you will use DDL scripts to create your database
objects. The SQL Server 2008 Management Studio user interface simply
allows you to create an underlying DDL statement using the appropriate GUI component. Figure 1 shows the user interface for creating a table. The DDL statement for creating the same table is shown in Example 1.
Tip
Remember that the user interface provided by SQL Server 2008 Management Studio allows you to visually design DDL
statements. Any task that is available in SQL Server 2008 Management
Studio can be completed using a DDL script, but not all options
available within a DDL script are available within the user interface.
The key DDL statements are CREATE, ALTER, and DROP. The CREATE statement creates a SQL Server database object, such as a table, view, or stored procedure. Example 1 creates a new table named Produce and a new view named Fruit. In this example we also use the INSERT DML statement to add three rows to our new table.
Example 1. Using the CREATE DDL Statement to Create a New Table and View
USE AdventureWorks;
GO
-- Use the CREATE DDL statement to create a new table named Produce
CREATE TABLE Produce
(ProductID int PRIMARY KEY,
ProductName varchar(50),
ProductType varchar(20))
-- Use the INSERT DML statement to add rows to the Produce table
INSERT Produce VALUES
(1, 'Tomato', 'Vegetable'),
(2, 'Pear', 'Fruit'),
(3, 'Kiwifruit', 'Fruit');
GO
-- Use the CREATE DDL statement to create a new view named Fruit that shows us only produce of type 'Fruit'
CREATE VIEW Fruit AS
SELECT * FROM Produce WHERE ProductType = 'Fruit';
GO
-- Use the SELECT statement to view the data in the Fruit view
SELECT * FROM Fruit
-- Results:
-- ProductID ProductName ProductType
-- --------- ----------- -----------
-- 2 Pear Fruit
-- 3 Kiwifruit Fruit
The ALTER DDL statement changes an existing object; for example, it can be used to add or
remove columns from a table. You can also use this statement to change
the definition of a view, stored procedure, trigger, or function. Example 2 adds a Price column to the Produce table we have created in Example 1. In this example we also redefine the view to include the new Price column. Do not confuse the ALTER statement, which changes an object definition, with the UPDATE statement, which changes data in a table.
Example 2. Using the ALTER DDL Statement to Add a New Column to a Table and Redefine a View
-- Add a new column
ALTER TABLE Produce
ADD Price Money;
GO
-- Use the UPDATE statement to set prices
UPDATE Produce SET Price = 2.50 WHERE ProductID = 1;
UPDATE Produce SET Price = 3.95 WHERE ProductID = 2;
UPDATE Produce SET Price = 4.25 WHERE ProductID = 3;
GO
-- Redefine the view
ALTER VIEW Fruit AS
SELECT ProductID, ProductName, Price FROM Produce WHERE ProductType = 'Fruit';
GO
SELECT * FROM Fruit
-- Results:
-- ProductID ProductName Price
-- --------- ----------- -----
-- 2 Pear 3.95
-- 3 Kiwifruit 4.25
The DROP DDL
statement removes an object from the database. If other objects depend
on the object you are attempting to drop, this statement will not
succeed and an error will be raised. Example 3
deletes data from the Produce table, and then removes both the Fruit
view and the Produce table from the database. In this example, we also
attempt to drop the Person.Contact table. This operation will fail, as
other objects depend on Person.Contact. Do not confuse the DROP statement, which removes an object from the database, with the DELETE statement, which deletes data from a table.
Example 3. Using the DROP DDL Statement to Remove Tables and Views from a Database
DELETE FROM Produce;
SELECT * FROM Fruit;
-- Results:
-- ProductID ProductName Price
-- --------- ----------- ------
-- (0 row(s) affected)
DROP VIEW Fruit;
GO
DROP TABLE Produce;
GO
DROP TABLE Person.Contact;
-- Results:
-- Msg 3726, Level 16, State 1, Line 1
-- Could not drop object 'Person.Contact' because it is referenced by a FOREIGN KEY constraint.
When preparing for the exam, ensure you have practiced using the CREATE, ALTER, and DROP DDL
statements and have achieved a good understanding of the DDL syntax.
The AdventureWorks sample database is a great tool for learning
Transact-SQL without the risk of damaging your live databases.
Earlier examples in this chapter have made use of INSERT, UPDATE, and DELETE DML statements. These DML operations (as well as the MERGE
statement) are logged operations. When an operation is logged, data
about the operation is stored in the SQL Server transaction log. The
transaction log files can be backed up and replayed into an earlier
database backup. Although the log replay functionality is slow, it
allows you to restore the database to the point in time when the
database file was lost.
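As a rough sketch of this backup-and-replay sequence (the database name and backup file paths are placeholders, and the database is assumed to use the Full recovery model), the Transact-SQL looks like this:
-- Take a full database backup
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\Backups\AdventureWorks_full.bak';
GO
-- Later, back up the transaction log, capturing all logged operations since the full backup
BACKUP LOG AdventureWorks
TO DISK = 'C:\Backups\AdventureWorks_log.trn';
GO
-- To recover: restore the full backup without recovery, then replay the log backup
RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\Backups\AdventureWorks_full.bak' WITH NORECOVERY;
RESTORE LOG AdventureWorks
FROM DISK = 'C:\Backups\AdventureWorks_log.trn' WITH RECOVERY;
GO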
For
performance reasons, some operations that affect SQL Server data can be
performed as nonlogged or minimally logged. This means that the
information about these operations is not fully recorded in the SQL
Server transaction log. Nonlogged operations offer much better
performance than logged operations. If a nonlogged operation occurs
after the database has been backed up, you will not be able to replay
the logs into the database after you have restored the database from
backup.
The following DML operations are either nonlogged or minimally logged, depending on the database recovery model:
TRUNCATE TABLE
SELECT...INTO
Bulk load operations, such as BULK INSERT and the bcp utility
You cannot use these statements on tables that are published for replication. The selected database recovery model determines which transactions are recorded in the transaction log. Three recovery models are available for SQL Server 2008 databases: Simple, Full, and Bulk-Logged.
When
the database recovery model is set to Simple, log files are reused as
soon as they become full. This means that the transaction logs use up
very little space, and you don’t need to worry about log file
management. However, when a database is set to the Simple recovery
model and the database file is lost, you will not be able to recover
any changes made after the last full backup. You will also not be able
to recover to a point in time, as the transaction details are stored in transaction logs that have since been overwritten.
The
Full recovery model could be said to be the opposite of the Simple
recovery model. Transaction logs are kept, and all transactions without
exception are written to the logs. This includes nonlogged operations
like TRUNCATE TABLE and SELECT...INTO.
Although you lose the performance advantages of nonlogged operations
with this recovery model, all data is recoverable provided transaction
logs are intact. You can also restore to a point in time if necessary.
The Bulk-Logged recovery model is similar to the Full recovery model, except that operations that can be minimally logged are not fully recorded in the transaction log. This provides a performance advantage for bulk operations. However, if such an operation has occurred since the last full backup, you will not be able to recover any changes made since that backup. The Bulk-Logged recovery model does not support point-in-time recovery.
In
production environments, the full database recovery model is generally
used, as it ensures maximum recoverability. If you wish to perform a
high-performance nonlogged operation, you can temporarily switch the
recovery model to Bulk-Logged, perform the operation, switch the
recovery model back to Full, and perform a full backup. The Full
recovery model is the default when creating databases in SQL Server.
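A minimal sketch of that sequence follows; the database name is a placeholder and the bulk operation itself is only indicated by a comment:
-- Temporarily switch to the Bulk-Logged recovery model
ALTER DATABASE AdventureWorks SET RECOVERY BULK_LOGGED;
GO
-- ... perform the bulk operation here, for example a BULK INSERT or SELECT...INTO ...
GO
-- Switch back to Full and take a full backup so the backup chain remains usable
ALTER DATABASE AdventureWorks SET RECOVERY FULL;
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\Backups\AdventureWorks_full.bak';
GO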
Working with Tables and Views
Tables
are the database objects that store data in a SQL Server database.
Tables are structured as columns and rows, like a spreadsheet. The
columns define the type and length of data they can store. Every table
must have at least one column. Column names must be unique within a table; that is, you cannot have two columns named ProductName in the same Product table. Tables store the underlying data within the .MDF
and .NDF data files as pages and extents. Columns are sometimes associated with constraints, for example, PRIMARY KEY, UNIQUE, or DEFAULT. Types of constraints will be explained later in this chapter. You can also mark columns with the following special attributes:
Identity Columns
Values for these columns are generated automatically in sequence by
incrementing every time a row is added. Usually, the values 1, 2, 3, and so on are used, but you can define your own seed (starting value) and increment value for the identity column.
Computed Columns These columns do not store any data; instead, they define a formula that calculates the column value at query time.
Timestamp Columns These columns are used as a mechanism for version-stamping table rows and tracking changes.
Uniqueidentifier Columns These columns store Globally Unique Identifiers (GUID). GUID values are used for replication and are guaranteed to be unique.
When defining columns for a new or existing table, you can specify column nullability. A column is said to be nullable if it allows storing null (empty) values. You can choose to mark a column as not nullable. If anyone attempts to insert a NULL value into such a column, an error will be raised, and the INSERT operation will fail.
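To make these attributes concrete, here is a small sketch of a table definition that uses them; the table and column names are invented for illustration:
CREATE TABLE OrderLines
(OrderLineID int IDENTITY(1,1) PRIMARY KEY,   -- identity column: seed 1, increment 1
 LineGuid uniqueidentifier DEFAULT NEWID(),   -- globally unique identifier
 Quantity int NOT NULL,                       -- not nullable
 UnitPrice money NOT NULL,
 LineTotal AS (Quantity * UnitPrice),         -- computed column, calculated at query time
 Notes varchar(200) NULL,                     -- nullable
 RowVer rowversion);                          -- rowversion (timestamp) column for change tracking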
Creating Tables
Tables can be created and modified using the SQL Server Management Studio table designer or the CREATE TABLE or ALTER TABLE
statements. To access the SQL Server Management Studio graphical table
designer, in Object Explorer expand the database in which you wish to create the table. Then, right-click Tables and click New Table. To modify an existing table, right-click it and then click Design.
The table designer shows the columns that will be created for your
table at the top and the properties of the selected column in the Column Properties pane, usually located at the bottom of the screen. Figure 2 shows the use of SQL Server Management Studio to create a new table.
To create a table using DDL, use the CREATE TABLE statement along with the syntax shown in Example 4.
Example 4. CREATE TABLE Statement-Syntax
CREATE TABLE [database_name.][schema_name.]table_name
(column1_name data_type [NULL | NOT NULL] [PRIMARY KEY] [IDENTITY],
column2_name data_type [NULL | NOT NULL],
[<computed_column_definition>])
In this statement, the table_name is the name of the table you wish to create. When defining columns, you can specify whether or not they will allow NULL values. You can also state that a column will be designated as the PRIMARY KEY for the table and whether it will contain automatically incrementing values, known as IDENTITY columns. The computed_column_definition is the formula for a calculated column. When defining columns, you must designate a data type, like varchar or int, and in some cases a length.
Table 1 summarizes built-in data types that are available to you when you are defining columns.
Table 1. Built-In Data Types
Numeric | Character | Dates and Times | Others |
---|---|---|---|
Tinyint | Char | Datetime | Binary |
Smallint | Nchar | Smalldatetime | Bit |
Int | Varchar | Date | Cursor |
Bigint | Nvarchar | Datetime2 | Xml |
Smallmoney | Text | Datetimeoffset | Varbinary |
Money | Ntext | Time | Uniqueidentifier |
Decimal | | Timestamp | Hierarchyid |
Double | | | Rowversion |
Float | | | Sql_variant |
Real | | | Image |
Some
of the data types shown in the table also allow you to specify the
length or precision for the data stored in the column you are creating.
For example, a column of type char(1000) allows you to store up to 1000
characters per row. A column of type decimal(10) allows you to store integers of up to 10 digits (the default scale is 0, so no digits are stored to the right of the decimal point), while decimal(10,5) allows you to store numbers of up to 10 digits in total, with up to 5 digits to the right of the decimal point. Variable-length data types, like
varchar, nvarchar, and varbinary, consume only the space that the
characters stored in the column take up. Their fixed-length equivalents, char, nchar, and binary, consume a fixed amount of space regardless of
the amount of actual data contained in the column. Data types prefixed
with “n”—nvarchar and nchar—store Unicode text and can be used to store
characters from multiple languages in one column.
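As a quick sketch (the table and values are invented for illustration), the differences in length, precision, and Unicode support look like this:
CREATE TABLE TypeDemo
(FixedText char(10),         -- always consumes 10 characters of storage
 VariableText varchar(10),   -- consumes only the space actually used
 UnicodeText nvarchar(10),   -- can store characters from any language
 ExactNumber decimal(10,5)); -- up to 10 digits in total, 5 of them after the decimal point
INSERT TypeDemo VALUES ('abc', 'abc', N'日本語', 12345.67891);
SELECT * FROM TypeDemo;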
Creating User-Defined Data Types
Sometimes you need to create your own data types based on the built-in data types introduced earlier. Custom data types are also known as user-defined data types (UDTs). UDTs are especially useful when you must store data with the same length or precision over and over again. For example, you can create a new user-defined data type to represent people's names. This UDT can be based on nvarchar(50) and disallow nulls. It can then be bound to any column that is to contain people's names, keeping those columns consistent throughout the database. Create your user-defined data types in the model system database, so that they are automatically inherited by all new databases you create.
User-defined data types are created using the CREATE TYPE statement. The syntax is shown in Example 5.
Example 5. CREATE TYPE Statement—Syntax
CREATE TYPE [schema_name.]type_name
{ FROM base_type([precision],[scale])
[NULL | NOT NULL]
}
Example 6
shows the syntax used to create a user-defined data type named
PersonName and to create a table that contains two columns of type
PersonName.
Example 6. Using the CREATE TYPE Statement
CREATE TYPE PersonName
FROM varchar(50)
NOT NULL;
GO
CREATE TABLE TeamMembers
(MemberId int PRIMARY KEY,
MemberName PersonName,
ManagerName PersonName);
GO
There is no ALTER TYPE statement for changing the definition of a user-defined data type; to change it, you must drop the type and re-create it. The DROP TYPE statement should be used to remove the user-defined data types you no longer need in the database. You cannot remove user-defined types from the database while there are tables with columns based on these types.
If you attempt to use the DROP TYPE
statement to remove a data type that is in use, you will get an error
message similar to: “Msg 3732, Level 16, State 1, Line 1. Cannot drop
type ‘PersonName’ because it is being referenced by object
‘TeamMembers’. There may be other objects that reference this type.”
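A minimal sketch of removing the type created in Example 6; the dependent TeamMembers table must be dropped (or altered to no longer use the type) first:
-- Remove the dependent table first, then the user-defined data type
DROP TABLE TeamMembers;
GO
DROP TYPE PersonName;
GO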
Working with Constraints
Constraints
are data validation rules that are bound to a column or a set of
columns in a table. Constraints can also be used to enforce a
relationship between two entities represented as two tables. The
available types of constraints are as follows:
Check Constraints
These constraints validate the integrity of data in a column by checking it against a valid comparison. For example, you can use a CHECK constraint to ensure that no one in your Employees table has a Birth Date earlier than 01/01/1880. You can also use a CHECK constraint to validate that an e-mail address is always at least seven characters long.
Primary Key Constraints PRIMARY KEY
constraints represent the unique identifier column that will enforce
the uniqueness of each row. For example, you can designate the
CustomerID column as the PRIMARY KEY for the Customers table. If you
get two customers that have the same values in the Name column and
other columns, but represent different people, you will use the primary
key to distinguish between them. It is a best practice to always have a
primary key in each table and to use surrogate primary keys that have
no meaning to the application.
Unique Constraints These constraints are similar to PRIMARY KEY
constraints, except that you can have more than one unique constraint
per table. For example, you can designate that the combination of
FirstName, LastName and TelephoneNumber is unique in the Customers
table and that the EmailAddress column can only contain unique values.
Foreign Key Constraints These constraints enforce a relationship between two tables. For example, you can use a FOREIGN KEY
constraint to specify that any row in the Orders table must have a
corresponding row in the Customers table, and that the tables are
linked through the CustomerID column, which is included in both tables.
Once this FOREIGN KEY constraint is enforced, you cannot delete a row from the Customers table that has related rows in the Orders table.
Default Constraints Also known as “defaults,” the DEFAULT
constraints specify a default value to be inserted into a column if no
value is inserted. Defaults can be bound to a column that is defined as
NULL or NOT NULL.
An example of a default is to use the value “Not Applicable” for the
ProductColour every time someone adds a product to the Products table
without specifying a color.
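To illustrate the FOREIGN KEY and DEFAULT constraints described above, here is a minimal sketch; the Customers and Orders tables and their columns are invented for this illustration:
CREATE TABLE Customers
(CustomerID int PRIMARY KEY,
 CustomerName varchar(50) NOT NULL);
CREATE TABLE Orders
(OrderID int PRIMARY KEY,
 CustomerID int NOT NULL
   FOREIGN KEY REFERENCES Customers(CustomerID), -- every order must reference an existing customer
 OrderStatus varchar(20) DEFAULT 'New');         -- used when no value is supplied
GO
INSERT Customers VALUES (1, 'Jane Dow');
INSERT Orders(OrderID, CustomerID) VALUES (100, 1); -- OrderStatus receives the default 'New'
DELETE FROM Customers WHERE CustomerID = 1;         -- fails: the customer has related orders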
When
you attempt to insert, delete, or modify data in a table that will
result in a constraint violation, the statement will roll back. DML statements, like INSERT, UPDATE, DELETE, or MERGE, always succeed or fail as a whole. For example, if you were inserting 1000 records into a table, but one violated a PRIMARY KEY or UNIQUE constraint, all 1000 rows would roll back and nothing would be inserted. If a DELETE statement violates a FOREIGN KEY constraint, even on one row, the entire DELETE statement would fail and nothing would be deleted. You will never receive a partial result set from a DML statement. Example 7 shows the syntax used for working with constraints.
Tip
Remember that DML statements commit as a whole or not at all. A constraint violation will cause the entire statement to fail and roll back.
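As a quick sketch of this all-or-nothing behavior (the Demo table is invented for illustration), a multirow INSERT in which a single row violates a constraint inserts nothing at all:
CREATE TABLE Demo (ID int PRIMARY KEY);
INSERT Demo VALUES (1), (2), (2), (3);
-- Fails with error 2627 (violation of PRIMARY KEY constraint); the statement is terminated
SELECT COUNT(*) FROM Demo; -- returns 0; no rows were inserted
DROP TABLE Demo;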
Example 7. Working with Constraints
CREATE TABLE Stars
(StarID int PRIMARY KEY,
StarName varchar(50) Unique,
SolarMass decimal(10,2) CHECK(SolarMass > 0),
StarType varchar(50) DEFAULT 'Orange Giant');
GO
INSERT Stars(StarID, StarName, SolarMass)
VALUES (1, 'Pollux', 1.86);
INSERT Stars(StarID, StarName, SolarMass, StarType)
VALUES (2, 'Sun', 1, 'Yellow dwarf');
SELECT * FROM Stars
-- Results:
-- StarID StarName SolarMass StarType
-- ------ -------- --------- ------------
-- 1 Pollux 1.86 Orange Giant
-- 2 Sun 1.00 Yellow dwarf
INSERT Stars(StarID, StarName, SolarMass, StarType)
VALUES (2, 'Deneb', 6, 'White supergiant');
-- Results:
-- Msg 2627, Level 14, State 1, Line 1
-- Violation of PRIMARY KEY constraint 'PK__Stars__06ABC647542C7691'. Cannot insert duplicate key in object 'dbo.Stars'.
-- The statement has been terminated.
INSERT Stars(StarID, StarName, SolarMass, StarType)
VALUES (3, 'Deneb', -6, 'White supergiant');
-- Results:
-- Msg 547, Level 16, State 0, Line 1
-- The INSERT statement conflicted with the CHECK constraint "CK__Stars__SolarMass__58F12BAE". The conflict occurred in database "AdventureWorks", table "dbo.Stars", column 'SolarMass'.
-- The statement has been terminated.
INSERT Stars(StarID, StarName, SolarMass, StarType)
VALUES (3, 'Deneb', 6, 'White supergiant');
SELECT * FROM Stars
-- Results:
-- StarID StarName SolarMass StarType
-- ------ -------- --------- ------------
-- 1 Pollux 1.86 Orange Giant
-- 2 Sun 1.00 Yellow dwarf
-- 3 Deneb 6.00 White supergiant
DROP TABLE Stars
Creating Indexes
An
index is a lookup structure created on a table to optimize sort and
query performance. Indexes are created on a particular column or
columns, and store the data values for this column or columns in order.
When raw underlying table data is stored in no particular order, this
situation is referred to as a heap.
The heap is composed of multiple pages, with each page containing
multiple table rows. When raw underlying data is stored in order,
sorted by a column or columns, this situation is referred to as a clustered index.
For example, if you have a table named Customer, with a clustered index
on the FullName column, the rows in this table will be stored in order,
sorted by the full name. This means that when you are searching for a particular full name, the query optimizer component can execute the query more efficiently by performing an index lookup rather than a table scan. Only one clustered index is allowed per table; usually this is created on the column designated as the PRIMARY KEY.
You can also create additional nonclustered indexes
on a table that is stored either as a heap or as a clustered index. A
nonclustered index is a separate lookup structure that stores index
values in order, and with each index value, it stores a pointer to the
data page containing the row with this index value. Nonclustered
indexes speed up data retrieval. It makes sense to create nonclustered indexes on columns that are frequently used in search criteria. The trade-off
with indexes is write performance. Every time a new row is inserted,
the index must also be updated. When writing data to a table with
nonclustered indexes, sometimes the pages within the table have to be
rearranged to make room for the new values. In addition, indexes are
storage structures that take up disk space. Indexes are created using
the CREATE INDEX statement. Example 8 shows the syntax for creating an index.
Example 8. CREATE INDEX Statement—Syntax
CREATE [ UNIQUE ] [ CLUSTERED | NONCLUSTERED ] INDEX index_name
ON table_or_view ( column1 [ ASC | DESC ], column2, ...n)
[ INCLUDE (additional_column_name, ...n) ]
[ WHERE filter_clause]
[ WITH ( option [ ,...n ] ) ]
The CREATE INDEX statement creates a clustered or nonclustered index on a specified column or columns. You can choose to create the index as UNIQUE, which will enforce a unique constraint on the index columns. A filter_clause
can be specified to create indexes only on a subset of data that meets
specific criteria. This is useful for a very large table, where
creating an index on all values of a particular column will be
impractical. Table 2 summarizes index options that can be used with the CREATE INDEX statement.
Table 2. Index Options
Option | Explanation |
---|---|
PAD_INDEX = ON | OFF | When this option is ON, free space is allocated in each page of the index. This allows new values to be inserted without rearranging a large amount of data. The amount of free space allocated is specified by the FILLFACTOR parameter. When this option is OFF, enough free space for one row is reserved in every page during index creation. |
FILLFACTOR = fill factor percentage | Specifies the percentage of each page that should be filled up with data. For example, a fill factor of 80 means 20% of each page will be empty and available for new data. The fill factor is used only when you create or rebuild an index. |
SORT_IN_TEMPDB = ON | OFF | Specifies whether the data should be sorted in the tempdb database instead of the current database. This may give performance advantages if the tempdb database is stored on a different disk from the current database. |
IGNORE_DUP_KEY = ON | OFF | Specifies that duplication errors should be ignored when creating unique indexes. |
STATISTICS_NORECOMPUTE = ON | OFF | Specifies that optimization statistics should not be updated at this time. |
DROP_EXISTING = ON | OFF | Specifies that the existing index with the same name should be dropped and then re-created. This equates to an index rebuild. |
ONLINE = ON | OFF | Specifies that the underlying table should remain online and accessible by users while the index is being built. This option is only available in SQL Server 2008 Enterprise or Developer edition. |
ALLOW_ROW_LOCKS = ON | OFF | Specifies whether locks should be held on each row, as necessary. |
ALLOW_PAGE_LOCKS = ON | OFF | Specifies whether locks should be held on each page, as necessary. |
MAXDOP = max_degree_of_parallelism | Specifies the maximum number of processors that are to be used during the rebuild operation. |
DATA_COMPRESSION = NONE | ROW | PAGE | Use data compression at row or page level of the index. |
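As a sketch of the INCLUDE and filter_clause options described above (reusing the Stars table from these examples; the filter itself is invented for illustration):
-- Index only rows of one star type, and carry SolarMass in the index pages
CREATE NONCLUSTERED INDEX Ix_Stars_OrangeGiants
ON Stars(StarName)
INCLUDE (SolarMass)
WHERE StarType = 'Orange Giant';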
Example 9 creates a clustered index (by star name) and a nonclustered index (by star type) on the Stars table we created in the previous example. Figure 3 shows how the IX_Star_Name index can be created using the SQL Server Management Studio interface.
Example 9. Working with Indexes
--Create the table specifying that the primary key index is to be created as nonclustered
CREATE TABLE Stars
(StarID int PRIMARY KEY NONCLUSTERED,
StarName varchar(50) Unique,
SolarMass decimal(10,2) CHECK(SolarMass > 0),
StarType varchar(50) DEFAULT 'Orange Giant');
GO
CREATE CLUSTERED INDEX Ix_Star_Name
ON Stars(StarName)
WITH (PAD_INDEX = ON,
FILLFACTOR = 70,
ONLINE = ON);
GO
CREATE NONCLUSTERED INDEX Ix_Star_Type
ON Stars(StarType)
WITH (PAD_INDEX = ON,
FILLFACTOR = 90);
GO
When you are creating a PRIMARY KEY constraint, an index on the column(s) designated as PRIMARY KEY
will be created automatically. This index will be clustered by default,
but this can be overridden when creating the table by specifying the PRIMARY KEY NONCLUSTERED option. As a best practice, it is recommended that you accept the
default of the clustered primary key column, unless you have a specific
reason to designate another column as the clustered index key. Usually,
the automatically created index is named PK_TableName_<Unique Number>, but this can be changed at any time by renaming the index. For example, a newly created Stars table with a PRIMARY KEY of StarID automatically receives an index with a name like PK__Stars__06ABC6465F9E293D (the generated suffix will differ on your system).
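Renaming an index can be done with the sp_rename system stored procedure; a minimal sketch using the Ix_Star_Type index from Example 9:
-- The object name is qualified as table.index_name; the object type is 'INDEX'
EXEC sp_rename 'Stars.Ix_Star_Type', 'IX_Stars_StarType', 'INDEX';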
Exam Warning
Remember that when creating a table, a unique index will be automatically created on the columns designated as the PRIMARY KEY.
If you wish to avoid the long rebuild time associated with building a
clustered index, or if you wish to create the clustered index on a
column different from the PRIMARY KEY, you must explicitly specify the PRIMARY KEY NONCLUSTERED option. The PRIMARY KEY will always be unique.
Working with Full-Text Indexes
Standard indexes are great when used with the simple WHERE clause of the SELECT
statement. An index will greatly reduce the time it will take you to
locate rows where the indexed column is equal to a certain value, or
when this column starts with a certain value. However, standard indexes
are inadequate for fulfilling more complex text-based queries. For
example, creating an index on StarType will not help you find all rows
where the StarType column contains the word “giant” but not the word “supermassive”.
To
fulfill these types of queries, you must use full-text indexes.
Full-text indexes are complex structures that consolidate the words
used in a column and their relative weight and position, and link
these words with the database page containing the actual data.
Full-text indexes are built using a dedicated component of SQL Server
2008—the Full-Text Engine.
In SQL Server 2005 and earlier, the Full-Text Engine was its own
service, known as full-text search. In SQL Server 2008, the Full-Text
Engine is part of the database engine (running as the SQL Server
Service).
Full-text
indexes can be stored on a separate filegroup. This can deliver
performance improvements, if this filegroup is hosted on a separate
disk from the rest of the database. Only one full-text index can be created on a table, and the table must have a unique, single-column, non-nullable index that serves as the full-text key. Full-text indexes can be created on columns of type char, varchar, nchar, nvarchar, text, ntext, image, xml, varbinary, and varbinary(max). You must specify a type column when creating a full-text index on image, varbinary, or varbinary(max) columns. The type column stores the file extension (.docx, .pdf, .xlsx) of the document stored in the indexed column.
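A sketch of the type column syntax, assuming a hypothetical Documents table with a varbinary(max) DocContent column, a DocExtension column holding each document's file extension, a unique key index named PK_Documents, and an existing default full-text catalog:
-- TYPE COLUMN tells the Full-Text Engine which filter to use for each stored document
CREATE FULLTEXT INDEX ON Documents(DocContent TYPE COLUMN DocExtension)
KEY INDEX PK_Documents;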
Example 10 amends the Stars table to include a Description column and creates a full-text index on this column. The FREETEXT
function allows us to search on any of the words specified using the
full-text index. This yields a similar user experience as using an
Internet search engine.
Example 10. Creating and Using a Full-Text Index
ALTER TABLE Stars
ADD Description ntext DEFAULT 'No description specified' NOT NULL ;
GO
CREATE FULLTEXT CATALOG FullTextCatalog AS DEFAULT;
CREATE FULLTEXT INDEX ON Stars(Description)
KEY INDEX PK__Stars__06ABC6465F9E293D;
GO
UPDATE Stars SET Description = 'Deneb is the brightest star in the constellation Cygnus and one of the vertices of the Summer Triangle. It is the 19th brightest star in the night sky, with an apparent magnitude of 1.25. A white supergiant, Deneb is also one of the most luminous stars known. It is, or has been, known by a number of other traditional names, including Arided and Aridif, but today these are almost entirely forgotten. Courtesy Wikipedia.'
WHERE StarName = 'Deneb';
UPDATE Stars SET Description = 'Pollux, also cataloged as Beta Geminorum, is an orange giant star approximately 34 light-years away in the constellation of Gemini (the Twins). Pollux is the brightest star in the constellation (brighter than Castor (Alpha Geminorum). As of 2006, Pollux was confirmed to have an extrasolar planet orbiting it. Courtesy Wikipedia.'
WHERE StarName = 'Pollux';
GO
SELECT StarName
FROM Stars
WHERE FREETEXT (Description, 'planet orbit, giant');
GO
-- Results:
-- StarName
-- ---------------------------------------------------
-- Pollux