Tuesday, October 11, 2011
OLAP vs OLTP
OLTP: On Line Transactional Processing
OLTP systems are transaction-based.
OLTP systems handle large numbers of short online transactions.
The typical operations are INSERT, UPDATE, and DELETE.
OLTP systems maintain data integrity and provide fast query processing in multi-user environments.
Transactions per second determine the efficiency of the system.
Detailed data is stored in an entity model, usually normalized to 3NF.
OLTP databases generally store transactional data.
OLAP: On Line Analytical Processing
OLAP systems are analysis-based.
They are capable of analyzing large amounts of data.
OLAP systems handle far fewer transactions than OLTP systems.
Response time determines the efficiency of the system.
Queries against these systems are more complex.
Data mining techniques are also applied to OLAP systems.
These systems store data in star or snowflake schemas in their databases.
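To make the contrast concrete, here is a minimal T-SQL sketch (all table and column names are hypothetical): an OLTP workload runs many short transactions like the first block, while an OLAP workload runs aggregate queries like the second.

-- OLTP: a short transaction that writes a few rows and must preserve integrity.
BEGIN TRANSACTION;
INSERT INTO Orders (OrderID, CustomerID, OrderDate)
VALUES (1001, 42, GETDATE());
UPDATE Inventory
SET QuantityOnHand = QuantityOnHand - 1
WHERE ProductID = 7;
COMMIT TRANSACTION;

-- OLAP: a complex read that scans a star schema and aggregates many rows.
SELECT d.CalendarYear, SUM(f.SalesAmount) AS TotalSales
FROM FactSales AS f
JOIN DimDate AS d ON f.DateKey = d.DateKey
GROUP BY d.CalendarYear;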
Sunday, March 28, 2010
Performance Counters for SSAS 2008 Part-II
Storage Engine Query
I think that this is a really interesting category of counters to start with. Here you find information about the number of queries processed per second, the caching rate of the queries, the average time per query, and so on. There are really a lot of counters, so this category gives you a good overview of the server's workload. To begin with, you might want to look at the following counters:
All of the following counters belong to the MSAS 2008:Storage Engine Query object.
Current measure group queries – Current number of measure group queries being actively worked on.
Measure group queries/sec – Rate of measure group queries.
Queries answered/sec – Rate of queries answered.
Bytes sent/sec – Rate of bytes sent by the server to clients in response to queries.
Queries from cache direct/sec – Rate of queries answered directly from the cache.
Queries from cache filtered/sec – Rate of queries answered by filtering an existing cache entry.
Queries from file/sec – Rate of queries answered from files.
Avg time/query – Average time per query, in milliseconds, based on queries answered since the last counter measurement.
Dimension cache lookups/sec – Rate of dimension cache lookups.
Dimension cache hits/sec – Rate of dimension cache hits.
Measure group cache lookups/sec – Rate of measure group cache lookups.
Measure group cache hits/sec – Rate of measure group cache hits.
Aggregation lookups/sec – Rate of aggregation lookups.
Aggregation hits/sec – Rate of aggregation hits.
Connections
This category gives information about the number of connections, sessions, and requests, which is also important for understanding the workload and seeing when bottlenecks occur.
All of the following counters belong to the MSAS 2008:Connection object.
Current connections – Current number of client connections established.
Requests/sec – Rate of connection requests (arrivals).
Current user sessions – Current number of user sessions established.
MDX
There are really a lot of counters about MDX. Just to name a few of them:
All of the following counters belong to the MSAS 2008:MDX object.
Number of cell-by-cell evaluation nodes – Total number of cell-by-cell evaluation nodes built by MDX execution plans.
Number of bulk-mode evaluation nodes – Total number of bulk-mode evaluation nodes built by MDX execution plans.
Total cells calculated – Total number of cell properties calculated.
Memory
Memory is always important. Here you can also see the amount of memory allocated to the aggregation cache.
All of the following counters belong to the MSAS 2008:Memory object.
Memory Usage KB – Memory usage of the server process; same as the Process\Private Bytes perfmon counter.
AggCacheKB – Current memory allocated to the aggregation cache, in KB.
Quota KB – Current memory quota, in KB. A memory quota is also known as a memory grant or memory reservation.
Quota Blocked – Current number of quota requests that are blocked until other memory quotas are freed.
Aggregations
If your cubes rely on aggregations, it might be interesting to know whether they can be held in memory or whether they are written to a temporary file. So you might also want to look at the following aggregation counters:
All of the following counters belong to the MSAS 2008:Proc Aggregations object.
Current partitions – Current number of partitions being processed.
Memory size bytes – Size of current aggregations in memory. This count is an estimate.
Temp file bytes written/sec – Rate of writing bytes to a temporary file. Temporary files are written when aggregations exceed memory limits.
Processing
Processing time is also very important for performance, especially when you process your cube regularly during the day while users are running their queries. Knowing when processing happens also makes it easier to investigate performance issues: if you are looking at the average query time, for instance, you should also check what the server is doing in the meantime.
All of the following counters belong to the MSAS 2008:Processing object.
Rows read/sec – Rate of rows read from all relational databases.
Total rows read – Count of rows read from all relational databases.
Rows converted/sec – Rate of rows converted during processing.
Total rows converted – Count of rows converted during processing.
Rows written/sec – Rate of rows written during processing.
Total rows written – Count of rows written during processing.
Threads
All of the following counters belong to the MSAS 2008:Threads object.
Query pool job queue length – A nonzero value means that there are more queries than query threads. You may increase the number of threads, but only if CPU utilization is not already high; otherwise the extra threads only cause more context switches and degrade performance.
Query pool busy threads – The number of busy threads in the query thread pool.
Query pool idle threads – The number of idle threads in the query thread pool.
I have completely left out the counters for Caches, Data Mining, Locks, Indexes, and Proactive Caching here, but these are important too; after investigating the counters above you will want to look at the more detailed ones as well. Just check Vidas' blog post for the complete list.
Performance Counters for SSAS 2008 Part-I
MSOLAP: Processing
Rows read/sec
MSOLAP: Proc Aggregations
Temp File Bytes Written/sec
Rows created/Sec
Current Partitions
MSOLAP: Threads
Processing pool idle threads
Processing pool job queue length
Processing pool busy threads
(I sometimes find the corresponding "query pool" counters more significant, so you may want to monitor both sets.)
SQL-Server: Memory Manager
Total Server Memory
Target Server Memory
Process
Virtual Bytes – msmdsrv.exe
Working Set – msmdsrv.exe
Private Bytes – msmdsrv.exe
% Processor Time – msmdsrv.exe and sqlservr.exe
Logical Disk:
Avg. Disk sec/Transfer – All Instances
Processor:
% Processor Time – Total
System:
Context Switches / sec
So it should be clear that you need to monitor the server machine comprehensively, not only the Analysis Services process.
However, I have picked some SSAS counters from the list above that are good ones to start your exploration with when focusing on the SSAS process.
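As a side note, the SQL Server-side counters above (though not the MSAS ones) can also be read from inside SQL Server via the sys.dm_os_performance_counters DMV (SQL Server 2005 and later). A minimal sketch:

-- Read the SQL Server Memory Manager counters listed above from T-SQL.
-- RTRIM is used because the name columns are fixed-width and padded.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE RTRIM(counter_name) IN (N'Total Server Memory (KB)',
                              N'Target Server Memory (KB)');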
Tuesday, March 24, 2009
SQL Server Optimization Tips
General Tips:
Try to restrict a query's result set by returning only the columns you need, not all of the table's columns. SQL Server will then return only those columns to the client, which reduces network traffic and boosts the overall performance of the query.
Try to avoid using SQL Server cursors whenever possible. Cursors can degrade performance in comparison with set-based SELECT statements. Try to use a correlated subquery or derived table if you need to perform row-by-row operations.
If you need to return the total row count of a table, there is an alternative to the SELECT COUNT(*) statement. Because SELECT COUNT(*) performs a full table scan to return the total row count, it can take a very long time on a large table. Instead, you can use the sysindexes system table: its rows column contains the total row count for each table in your database. So you can use the following SELECT statement instead of SELECT COUNT(*): SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2.
Use table variables instead of temporary tables. Table variables require fewer locking and logging resources than temporary tables, so table variables should be used whenever possible. Table variables were introduced in SQL Server 2000 and are not available in earlier versions.
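Returning to the row-count tip, a sketch of the metadata-based count (the table name Orders is hypothetical):

-- Approximate row count from metadata instead of a full table scan.
SELECT rows
FROM sysindexes
WHERE id = OBJECT_ID('Orders')  -- hypothetical table name
  AND indid < 2;                -- 0 = heap, 1 = clustered index

Keep in mind that the value in sysindexes is maintained by the engine and can be slightly out of date, so use it only where an approximate count is acceptable.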
Try to avoid using the DISTINCT clause whenever possible. Because DISTINCT results in some performance degradation, use this clause only when it is necessary.
Include a SET NOCOUNT ON statement in your stored procedures to suppress the message indicating the number of rows affected by a T-SQL statement. This reduces network traffic, because the client will not receive that message for each statement.
Use SELECT statements with the TOP keyword or the SET ROWCOUNT statement if you need to return only the first n rows. This can improve the performance of your queries because a smaller result set is returned, which also reduces traffic between the server and the clients.
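Combining the last two tips (SET NOCOUNT ON and TOP), a minimal stored procedure sketch (the procedure and table names are hypothetical):

CREATE PROCEDURE dbo.usp_GetRecentOrders
AS
BEGIN
    SET NOCOUNT ON;  -- suppress the "rows affected" messages
    SELECT TOP 10 OrderID, OrderDate
    FROM Orders
    ORDER BY OrderDate DESC;  -- return only the newest 10 rows
END;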
Try to use the UNION ALL statement instead of UNION whenever possible. UNION ALL is much faster than UNION, because it does not remove duplicate rows, whereas UNION searches for duplicates whether or not any exist.
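For example (table names hypothetical):

-- UNION must sort or hash the combined set to remove duplicates;
-- UNION ALL simply concatenates the two results.
SELECT CustomerID FROM CurrentOrders
UNION ALL
SELECT CustomerID FROM ArchivedOrders;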
Try to use constraints instead of triggers whenever possible. Constraints are much more efficient than triggers and can boost performance.
Use user-defined functions (UDFs) to encapsulate one or more Transact-SQL statements for reuse. Using UDFs can reduce network traffic.
You can specify whether index keys are stored in ascending or descending order. For example, using the CREATE INDEX statement with the DESC option (descending order) can increase the speed of queries that return rows in descending order. By default, ascending order is used.
If you need to delete all of a table's rows, consider using TRUNCATE TABLE instead of the DELETE command. TRUNCATE TABLE is a much faster way to delete all rows, because it removes them without logging the individual row deletes.
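For example (the staging table is hypothetical):

-- Logs only the page deallocations, not each deleted row:
TRUNCATE TABLE StagingOrders;
-- Slower alternative that logs every row delete:
-- DELETE FROM StagingOrders;

Keep in mind that TRUNCATE TABLE resets any IDENTITY seed and cannot be used on a table referenced by a foreign key constraint.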
Don't use Enterprise Manager to access remote servers over a slow link or to maintain very large databases. Because Enterprise Manager is very resource-intensive, use stored procedures and T-SQL statements in these cases.
Use SQL Server cursors to allow your application to fetch a small subset of rows instead of fetching all of a table's rows. Cursors allow an application to fetch any block of rows from the result set, including the next n rows, the previous n rows, or n rows starting at a certain row number. Fetching a smaller result set this way can reduce network traffic.
Try to use constraints instead of triggers, rules, and defaults whenever possible. Constraints are much more efficient and can boost performance. They are also more consistent and reliable, because you can make errors when you write your own code to perform the same actions as the constraints.
Use char/varchar columns instead of nchar/nvarchar if you do not need to store Unicode data. A char/varchar value uses one byte per character, while an nchar/nvarchar value uses two, so char/varchar columns need half the space to store the same data.
If you work with SQL Server 2000, use cascading referential integrity constraints instead of triggers whenever possible. For example, if you need cascading deletes or updates, specify the ON DELETE or ON UPDATE clause in the REFERENCES clause of the CREATE TABLE or ALTER TABLE statement. Cascading referential integrity constraints are much more efficient than triggers and can boost performance.
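A minimal sketch of a cascading constraint (table and column names hypothetical):

CREATE TABLE OrderDetails (
    OrderDetailID int PRIMARY KEY,
    OrderID int NOT NULL
        REFERENCES Orders (OrderID)
        ON DELETE CASCADE   -- deleting an order removes its detail rows
        ON UPDATE CASCADE   -- key updates propagate automatically
);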
Tips for Designing Stored Procedures:
Use stored procedures instead of heavy-duty queries. This can reduce network traffic, because the client sends the server only the stored procedure's name (perhaps with some parameters) instead of the text of a large query. Stored procedures can also be used to enhance security and conceal underlying data objects; for example, you can give users permission to execute a stored procedure that works with a restricted set of columns and data.
Call stored procedures using their fully qualified names. The complete name of an object consists of four identifiers: the server name, database name, owner name, and object name; a name that specifies all four parts is known as a fully qualified name. Using fully qualified names eliminates any confusion about which stored procedure you want to run, and can boost performance because SQL Server has a better chance of reusing the stored procedure's execution plan if it was executed using its fully qualified name.
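For example (all names hypothetical):

-- database.owner.procedure; prepend the server name for a four-part name.
EXEC Sales.dbo.usp_GetTopCustomers;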
Consider returning an integer value via the RETURN statement instead of as part of a recordset. The RETURN statement exits unconditionally from a stored procedure, so the statements following RETURN are not executed. Though RETURN is generally used for error checking, you can use it to return an integer value for any other reason. Using RETURN can boost performance because SQL Server will not create a recordset.
Don't use the prefix "sp_" in a stored procedure's name unless you are creating it in the master database. The "sp_" prefix is used for system stored procedure names, and Microsoft does not recommend it for user-created procedures, because SQL Server always looks for a procedure beginning with "sp_" in the following order: first in the master database, then by the fully qualified name provided, then using dbo as the owner if none is specified. So when you have a procedure with the "sp_" prefix in a database other than master, the master database is always checked first, and if a user-created procedure has the same name as a system stored procedure, the user-created procedure will never be executed.
Use the sp_executesql stored procedure instead of the EXECUTE statement. sp_executesql supports parameters, so using it instead of EXECUTE improves the readability of your code when many parameters are used. When you use sp_executesql to execute a Transact-SQL statement that will be reused many times, the SQL Server query optimizer will reuse the execution plan it generated for the first execution whenever the only variation is the parameter values.
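A minimal sketch (the table and parameter are hypothetical):

DECLARE @city nvarchar(50);
SET @city = N'Berlin';
-- The parameterized statement gets one reusable execution plan.
EXEC sp_executesql
    N'SELECT CustomerID FROM Customers WHERE City = @city',
    N'@city nvarchar(50)',
    @city = @city;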
Do not forget to close a SQL Server cursor when its result set is no longer needed. To close a cursor, use the CLOSE {cursor_name} command. It releases the cursor's result set and frees any cursor locks held on the rows on which the cursor is positioned.
Do not forget to deallocate a cursor when the data structures comprising it are no longer needed. To do so, use the DEALLOCATE {cursor_name} command, which removes the cursor reference and releases the data structures comprising the cursor.
Try to reduce the number of columns processed in a cursor. Include only the necessary columns in the cursor's SELECT statement. This reduces the cursor's result set, so the cursor uses fewer resources, which can increase cursor performance and reduce SQL Server overhead.
Use READ ONLY cursors instead of updatable cursors whenever possible. Because cursors can reduce concurrency and lead to unnecessary locking, use READ ONLY cursors if you do not need to update the cursor's result set.
Try to avoid using insensitive, static, and keyset cursors whenever possible. These cursor types produce the largest amount of overhead on SQL Server, because they cause a temporary table to be created in tempdb, which results in some performance degradation.
Use FAST_FORWARD cursors whenever possible. FAST_FORWARD cursors produce the least amount of overhead on SQL Server, because they are read-only and can only be scrolled from the first row to the last. Use a FAST_FORWARD cursor if you do not need to update the result set and FETCH NEXT will be the only fetch option used.
Use FORWARD_ONLY cursors if you need an updatable cursor and FETCH NEXT will be the only fetch option used. If you need a read-only cursor and FETCH NEXT will be the only fetch option used, use a FAST_FORWARD cursor instead of a FORWARD_ONLY cursor. Note that if either FAST_FORWARD or FORWARD_ONLY is specified, the other cannot be.
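Putting several of these cursor tips together, a minimal read-only pattern (table and column names hypothetical):

DECLARE @id int;
DECLARE order_cur CURSOR FAST_FORWARD FOR
    SELECT OrderID FROM Orders;   -- include only the columns you need
OPEN order_cur;
FETCH NEXT FROM order_cur INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- row-by-row work goes here
    FETCH NEXT FROM order_cur INTO @id;
END;
CLOSE order_cur;        -- release the result set and any cursor locks
DEALLOCATE order_cur;   -- free the cursor's data structures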
Consider creating indexes on columns frequently used in WHERE, ORDER BY, and GROUP BY clauses; these columns are the best candidates for indexing. Analyze your queries carefully to avoid creating indexes that will not be used.
Drop indexes that are not used. Because each index takes up disk space and slows the adding, deleting, and updating of rows, you should drop indexes that are not used. You can use the Index Tuning Wizard to identify indexes that are not used by your queries.
Try to create indexes on columns that have integer values rather than character values. Because integer values are usually smaller than character values (an int is 4 bytes, a bigint is 8 bytes), you can reduce the number of index pages used to store the index keys. This reduces the number of reads required to read the index and boosts overall index performance.
Limit the number of indexes if your application updates data very frequently. Because each index takes up disk space and slows the adding, deleting, and updating of rows, create new indexes only after analyzing how the data is used, the types and frequencies of the queries performed, and how your queries will use the new indexes. In many cases, the speed advantages of new indexes outweigh the disadvantages of additional space used and slower row modification; however, avoid redundant indexes and create them only when necessary. For a read-only table, the number of indexes can be higher.
Check that an index you are about to create does not already exist. Keep in mind that when you create a primary key or unique constraint, SQL Server automatically creates an index on the participating column(s). If you specify a different index name, you can end up creating duplicate indexes on the same column(s) again and again.
Create a clustered index rather than a nonclustered one to increase the performance of queries that return a range of values and of queries that contain GROUP BY or ORDER BY clauses and return sorted results. Because every table can have only one clustered index, choose its column(s) very carefully: analyze all your queries, pick the most frequently used ones, and include in the clustered index only the column(s) that provide the greatest performance benefit from it.
Create nonclustered indexes to increase the performance of queries that return few rows and where the index has good selectivity. In contrast to the clustered index, of which there can be only one per table, each table can have up to 249 nonclustered indexes. However, consider nonclustered index creation as carefully as clustered index creation, because each index takes up disk space and is a drag on data modification.
Avoid creating a clustered index based on an incrementing key. For example, if a table has a surrogate integer primary key declared as IDENTITY and the clustered index is created on that column, then every row inserted into the table is added at its end. When many rows are being added, a "hot spot" can occur. A "hot spot" occurs when many queries try to read or write data in the same area at the same time, and it results in an I/O bottleneck.
Note: by default, SQL Server creates a clustered index for the primary key constraint, so in that case you must explicitly specify the NONCLUSTERED keyword to have a nonclustered index created for the primary key constraint.
Create a clustered index for each table. If you create a table without a clustered index, the data rows are not stored in any particular order; this structure is called a heap. Every time data is inserted into such a table, the row is added at its end. When many rows are being added, a "hot spot" can occur. To avoid hot spots and improve concurrency, create a clustered index for each table.
If you create a composite (multi-column) index, try to order the key columns so that the WHERE clauses of the most frequently used queries match the leftmost column(s) of the index. The column order in a composite index is very important: the index will be used to evaluate a query only if the leftmost index key column is specified in the WHERE clause. For example, if you create a composite index such as (Name, Age), then a query with the clause WHERE Name = 'Alex' will use the index, but a query with the clause WHERE Age = 28 will not.
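For example (the table and index names are hypothetical):

CREATE INDEX IX_Employees_Name_Age ON Employees (Name, Age);
-- Can seek on the index (leftmost key column in the WHERE clause):
SELECT * FROM Employees WHERE Name = 'Alex';
-- Cannot seek on the index (leftmost key column missing):
SELECT * FROM Employees WHERE Age = 28;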
If you join several tables very frequently, consider creating an index on the join columns. This can significantly improve the performance of queries against the joined tables.
If your application performs the same query over and over on the same table, consider creating a covering index for that query. A covering index includes all of the columns referenced in the query, so it can improve performance because all the data for the query is contained within the index itself: only the index pages, not the data pages, are used to retrieve the data. Covering indexes can greatly improve query performance because they save a huge number of I/O operations.
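A sketch using the INCLUDE syntax available from SQL Server 2005 onward (all names hypothetical); in SQL Server 2000 you would instead add the extra columns to the index key:

CREATE NONCLUSTERED INDEX IX_Orders_Customer
    ON Orders (CustomerID)
    INCLUDE (OrderDate, TotalDue);
-- Answered entirely from index pages, with no data-page lookups:
SELECT OrderDate, TotalDue
FROM Orders
WHERE CustomerID = 42;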
Use the SQL Server Profiler Create Trace Wizard with the "Identify Scans of Large Tables" trace to determine which tables in your database may need indexes. This trace shows which tables are being scanned by queries instead of being accessed through an index.
Saturday, January 3, 2009
Visual Studio 2008 - New Features
Introduction
Visual Studio 2008, code-named "Orcas", Beta 2 has just hit the road, and since it is Beta 2, Visual Studio 2008 is feature-complete and ready for RTM. Below is a brief introduction to some of the new features introduced with VS 2008 and .NET 3.5 Beta 2.
A quick list of some of the new features:
Multi-Targeting support
Web Designer and CSS support
ASP.NET AJAX and JavaScript support
Project Designer
Data
LINQ – Language Integrated Query
The features listed and explained in this paper are not exhaustive; this document is intended to give you a head start with VS 2008.
1. Multi-Targeting Support
Earlier, each Visual Studio release only supported a specific version of the .NET Framework. For example, VS 2003 only works with .NET 1.1, and VS 2005 only works with .NET 2.0.
One of the major changes with the VS 2008 release is to support what Microsoft calls "Multi-Targeting". This means that Visual Studio will now support targeting multiple versions of the .NET Framework, and developers will be able to take advantage of the new features that Visual Studio provides without having to migrate their existing projects and deployed applications to use a new version of the .NET Framework.
Now when we open an existing project or create a new one with VS 2008, we can pick which version of the .NET Framework to work with. The IDE will update its compilers and feature-set to match the chosen .NET Framework.
Features, controls, projects, item-templates, and references that do not work with the selected version of the Framework will be made unavailable or will be hidden.
Unfortunately, support for Framework versions 1.1 and earlier has not been included. The present release supports the 2.0, 3.0, and 3.5 versions of the .NET Framework.
Microsoft plans to continue multi-targeting support in all future releases of Visual Studio.
Creating a New Project with Visual Studio 2008 that Targets .NET 2.0 Framework Library
The screenshots below depict the creation of a new web application targeting the .NET 2.0 Framework. Choose File->New Project. As we see in the snapshot below, in the top-right of the New Project dialog there is now a dropdown that allows us to choose which version of the .NET Framework to target when creating the new project. The available templates are filtered depending on the version chosen in the dropdown:
Can I Upgrade an Existing Project to .NET 3.5?
When we open a solution created with an older version of Visual Studio and the Framework, VS 2008 asks whether migration is required. If we opt to migrate, a migration wizard starts. If we wish to upgrade our project to target a newer version of the Framework at a later point in time, we can open the project properties page and choose the Target Framework; the required assemblies are referenced automatically. The snapshot below shows the properties page with the Target Framework option marked.
2. Web Designer, Editing and CSS Support
One feature that web developers will discover in VS 2008 is its drastically improved HTML designer and the extensive CSS support made available.
The snapshots below depict some of the new web designer features in-built into VS 2008.
Split View Editing
In addition to the existing Design and Code views, VS 2008 brings along the Split view, which allows us to view both the HTML source and the design at the same time and easily make changes in either view. As shown in the image below, when we select a tag in code view, the corresponding elements/controls are selected in design view.
CSS Style Manager
VS 2008 introduces a new tool inside the IDE called "Manage Styles". This shows all of the CSS style sheets for the page.
It can be used in any of the views - design, code, and split. The Manage Styles tool can be activated by choosing Format -> CSS Styles -> Manage Styles from the menu. A snapshot of the same would look like the following:
Create a new style using the New Style dialog window as shown in the snapshot below.
Now the style manager shows the .labelcaption style in the CSS styles list as well. However, observe that the body element has a circle around it while .labelcaption does not; this is because the style is not in use yet.
We will now select all the labels below and apply our new style .labelcaption.
We can choose to modify the existing style through GUI using "Modify style..." menu option in the dropdown menu as shown above or choose to hand edit the code by choosing the option "Go To Code".
CSS Source View Intellisense
The designer is equipped with the ability to select an element or control in design-view, and graphically select a rule from the CSS list to apply to it.
We will also find when in source mode that we now have intellisense support for specifying CSS class rules. The CSS Intellisense is supported in both regular ASP.NET pages as well as when working with pages based on master pages.
Code Editing Enhancements
Below is a non-exhaustive list of a few new code-editing improvements; there are many more that I don't know about yet.
Transparent Intellisense Mode
While using VS 2005/2003, we often find ourselves escaping out of Intellisense in order to better see the code around it, and then going back to complete what we were doing.
VS 2008 provides a new feature which allows us to quickly make the intellisense drop-down list semi-transparent. Just hold down the "Ctrl" key while the intellisense drop-down is visible and we will be able to switch it into a transparent mode that enables us to look at the code beneath without having to escape out of Intellisense. The screenshot below depicts the same.
Organize C# Using Statements
One of the small but nice new features in VS 2008 is support for better organizing using statements in C#. We can now select a list of using statements, right-click, and select the "Organize Usings" sub-menu. When we use this command, the IDE analyzes which types are used in the code file and automatically removes the namespaces that are declared but not required. A small and handy feature for code refactoring.
3. ASP.NET AJAX and JavaScript Support
JavaScript Intellisense
One new feature that developers will find in VS 2008 is built-in support for JavaScript Intellisense, which makes writing JavaScript and building AJAX applications significantly easier. Double-clicking an HTML control in design mode automatically wires up a click event for the control and creates the basic skeleton of the JavaScript function. As the image below depicts, JavaScript Intellisense is now built in. Other JavaScript Intellisense features include Intellisense for external JavaScript libraries and the ability to add Intellisense hints to JavaScript functions.
JavaScript Debugging
One new JavaScript feature in VS 2008 is much-improved support for JavaScript debugging, which makes debugging AJAX applications significantly easier. JavaScript debugging was already available in VS 2005; however, we had to run the web application first to set a breakpoint, or use the "debugger" JavaScript statement.
VS 2008 makes this much better by adding support for setting client-side JavaScript breakpoints directly within our server-side .aspx and .master source files.
We can now set both client-side JavaScript breakpoints and VB/C# server-side breakpoints at the same time on the same page and use a single debugger to step through both the server-side and client-side code in a single debug session. This feature is extremely useful for AJAX applications. The breakpoints are fully supported in external JavaScript libraries as well.
4. A Few Other Features and Enhancements
Below is a list of a few other enhancements and new features included in Microsoft Visual Studio 2008.
Project Designer
Windows Presentation Foundation (WPF) applications have been added to Visual Studio 2008. There are four WPF project types:
WinFX Windows Application
WinFX Web Browser Application
WinFX Custom Control Library
WinFX Service Library
When a WPF project is loaded in the IDE, the user interface of the Project Designer pages lets us specify properties specific to WPF applications.
Data
Microsoft Visual Studio 2008 Beta 2 includes the following new features to incorporate data into applications:
The Object Relational Designer (O/R Designer) assists developers in creating and editing the objects (LINQ to SQL entities) that map between an application and a remote database
Hierarchical update capabilities in Dataset Designer, providing generated code that includes the save logic required to maintain referential integrity between related tables
Local database caching incorporates an SQL Server Compact 3.5 database into an application and configures it to periodically synchronize the data with a remote database on a server. Local database caching enables applications to reduce the number of round trips between the application and a database server
LINQ – Language Integrated Query
LINQ is a new feature in VS 2008 that builds powerful query capabilities into the language syntax. LINQ introduces patterns for querying and updating data, and a set of new assemblies enable the use of LINQ with collections, SQL databases, and XML documents.
Visual Studio 2008 Debugger
The Visual Studio 2008 debugger has been enhanced with the following features:
Remote debugging support on Windows Vista
Improved support for debugging multithreaded applications
Debugging support for LINQ programming
Debugging support for Windows Communication Foundation
Support for script debugging, including client-side script files generated from server-side script, which now appear in Solution Explorer
Reporting
Visual Studio 2008 provides several new reporting features and improvements such as:
New Report Projects: Visual Studio 2008 includes two new project templates for creating reporting applications. When we create a new Reports Application project, Visual Studio provides a report (.rdlc) and a form with a ReportViewer control bound to the report.
Report Wizard: Visual Studio 2008 introduces a Report Wizard, which guides us through the steps to create a basic report. After we complete the wizard, we can enhance the report by using Report Designer.
Expression Editor Enhancement: The Expression Editor now provides expressions that we can use directly or customize as required.
PDF Compression: The ReportViewer controls can now compress reports that are rendered or exported to the PDF format.