When you try to configure and use Windows Integrated Authentication in Geneva Server Beta 2, you may not be prompted for credentials and instead receive an error message:

  • When connecting from another computer, you may see the following error message: 
    Access is denied due to invalid credentials
  • And when you connect from the same server, a more detailed error is displayed:
    HTTP Error 401.2 - Unauthorized: You are not authorized to view this page due to invalid authentication headers

The error messages indicate that Windows Authentication is disabled, but when you check the IIS configuration, it shows as enabled.

So, where is the error?

In IIS 7 the overall configuration file is stored in C:\Windows\System32\inetsrv\config\applicationHost.config. The file contains some settings that apply to the whole server, some that apply to each site, and some that apply to a specific path. If you scroll down the file until you reach the "FederationPassive" location section, you'll see that Windows Authentication is disabled. That would be fine, except that the section also removes the authentication providers, so no child location can use Windows Authentication without them!


Solution

Open applicationHost.config in a text editor and find the <location> tag related to Windows Integrated Authentication. Modify it to match the configuration shown in the section below.

Configuration section

Below you’ll find the complete configuration section, which you can copy and paste into your applicationHost.config file.

<location path="Default Web Site/FederationPassive/auth/integrated" >
    <system.webServer>
        <security>
            <authentication>
                <anonymousAuthentication enabled="false" />
                <windowsAuthentication enabled="true" useKernelMode="true" useAppPoolCredentials="true">
                    <providers>
                        <add value="Negotiate" />
                        <add value="NTLM" />
                    </providers>
                </windowsAuthentication>
            </authentication>
        </security>
        <handlers accessPolicy="Read, Script" />
    </system.webServer>
</location>

When you save the file, IIS will automatically reload the configuration, so you don’t need to restart any service.

Hope this helps!

When modeling a Web Application’s responsiveness under an increasing User Load, you usually configure a Load Test Mix with a step-based Load Pattern, run the test until the Response Time rises above a certain threshold, and then manually stop the test. This requires you to spend time monitoring the test’s progress and intervening to stop it.


Wouldn’t it be nice if this were automated? Fortunately, this can easily be achieved with a custom Load Test Plug-in (ILoadTestPlugin) and a configured Threshold Rule. (Custom load test plug-ins were improved in VS 2008 SP1 and are also available in VS 2010, which was used in the creation of this blog post.) This will save you (the Tester) the trouble of manually monitoring the load test and stopping it when the threshold is exceeded.

Let’s take a look at how this can be implemented in code:

Creating a Load Test Plug-in that stops the Load Test when a threshold is exceeded

Load Test Plug-ins allow custom code to be executed when a load test starts or ends, or when a threshold is exceeded, among other events. We’ll create a custom Load Test Plug-in that subscribes to the ThresholdExceeded event and stops the test.

  1. Create a Class Library project.
  2. Add a reference to the Microsoft.VisualStudio.QualityTools.LoadTestFramework assembly.
  3. Create a class named CustomLoadTestPlugin and complete it with the following code (VS2008 and VS2010):

    using System;
    using Microsoft.VisualStudio.TestTools.LoadTesting;
    
    public class CustomLoadTestPlugin : ILoadTestPlugin
    {
        private LoadTest test;
    
        // Called by the load test engine when the load test starts.
        public void Initialize(LoadTest loadTest)
        {
            if (loadTest == null)
            {
                throw new ArgumentNullException("loadTest");
            }
    
            this.test = loadTest;
    
            // Subscribe to the event raised whenever a threshold rule is violated.
            loadTest.ThresholdExceeded += new EventHandler<ThresholdExceededEventArgs>(OnThresholdExceeded);
        }
    
        private void OnThresholdExceeded(object sender, ThresholdExceededEventArgs e)
        {
            // React only to critical violations of the "Avg. Test Time" counter.
            if (e.ThresholdResult == ThresholdRuleResult.Critical
                && e.CounterValue != null
                && e.CounterValue.CounterName == "Avg. Test Time")
            {
                // Drop the user load to zero and abort the run.
                this.test.Scenarios[0].CurrentLoad = 0;
                this.test.Abort();
            }
        }
    }

  4. Build the Load Test Plug-in solution.

Note: If you’d like more information about creating a Load Test Plug-in, see the MSDN documentation on ILoadTestPlugin.

Using the Custom Load Test Plug-in

  1. Open the Load Test Configuration (.loadtest) you’d like to modify.
  2. Using Solution Explorer, add a reference to the Load Test Plug-In (either to the Load Test Plug-In project or to the compiled assembly).
  3. Add a Compare Constant threshold rule on the Avg. Test Time counter. To do this, expand the Counter Sets | LoadTest | Counter Categories | LoadTest:Test element, right-click the Avg. Test Time counter, and select Add Threshold Rule.
  4. Add the Load Test Plug-in to the test run. To do this, right-click the Load Test name (the top element) and choose Add Load Test Plug-in. In the window that opens, select CustomLoadTestPlugin and hit OK.

  5. That’s it! Run the Load Test and verify that it stops once the configured threshold is exceeded.

Hope this saves you time. If you’d like to learn more ways to extend Visual Studio Load Testing, check out Bill Barnett’s article.


Happy testing!


Let’s assume you often run Performance Tests and your application outputs a log file that you manually back up on every test run. You can automate that process using a Collector, a new feature that ships with Visual Studio 2010.

Note: For a great introduction to Data Collectors in Visual Studio 2010 Beta 1, please check Amit’s blog.

This post is meant to answer the question: How can I create my own Data Collector in Visual Studio 2010?

Steps to create a Performance Collector

Let’s create a Performance Collector that generates a log file and copies it to the Test Results folder:

  1. Create a Class Library project (C# 4.0).
  2. Add references to the Microsoft.VisualStudio.PerformanceTools.DataCollection and Microsoft.VisualStudio.QualityTools.ExecutionCommon assemblies. Both can be found in the %DevEnvDir%\PrivateAssemblies folder and should only be used for extending Visual Studio.
  3. Create a new class and give it a name, like CustomLogDataCollector.
  4. Add the necessary using declarations above the class name.
  5. Make the class inherit from DataCollector (in the Microsoft.VisualStudio.PerformanceTools.DataCollection namespace).
  6. Decorate the class with the DataCollectorFriendlyName, DataCollectorTypeUri and DataCollectorDescription attributes. The Uri needs to be unique for Visual Studio to identify the Collector; it is usually created following a Company/CollectorName/Version hierarchy.
  7. Add a property of type IDataCollectorSink and implement the DoInitialize method to capture the IDataCollectorSink value. An IDataCollectorSink allows the Collector to interact with the Test Results folder.
  8. Override the SessionEnd method and invoke the IDataCollectorSink’s SendFileAsync method to copy the log file contents into the Test Results folder. In the sketch after this list I’m simulating a log file by creating a HelloWorld.txt file with a single log line; it’s very easy to adapt this to your own environment.
  9. Build the project. Copy the project’s DLL into the %DevEnvDir%\PrivateAssemblies\DataCollectors folder.
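
Putting steps 3 through 8 together, a minimal sketch of the collector could look like the code below. The attribute, base-class and sink names come from the steps above, but the exact method signatures and the Uri are assumptions for illustration, so check the Beta 1 assemblies for the real ones.

using System;
using System.IO;
using Microsoft.VisualStudio.PerformanceTools.DataCollection;

[DataCollectorFriendlyName("Custom Log Collector")]
[DataCollectorTypeUri("datacollector://MyCompany/CustomLogDataCollector/1.0")] // hypothetical Uri
[DataCollectorDescription("Copies the application log into the Test Results folder")]
public class CustomLogDataCollector : DataCollector
{
    // Captured in DoInitialize; used to interact with the Test Results folder.
    private IDataCollectorSink DataSink { get; set; }

    // Hypothetical signature: capture the sink handed in by the test framework.
    protected override void DoInitialize(IDataCollectorSink dataSink)
    {
        this.DataSink = dataSink;
    }

    // Hypothetical signature: called when the test session ends.
    protected override void SessionEnd()
    {
        // Simulate the application log with a one-line HelloWorld.txt file.
        string logPath = Path.Combine(Path.GetTempPath(), "HelloWorld.txt");
        File.WriteAllText(logPath, "Hello World! This line simulates a log entry.");

        // Copy the file into the Test Results folder.
        this.DataSink.SendFileAsync(logPath);
    }
}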

That’s it! Now it’s time to enjoy your first Custom Collector. 

Using the custom Data Collector in your Test Project

To use the Data Collector for your Test Project you just need to:

  1. In Solution Explorer, double-click the “.testsettings” file (the project’s test run configuration) to open the configuration editor.
  2. Select the Execution Criteria configuration from the list on the left. In the Collectors section on the right, scroll down until you see your custom collector. Make sure the Enabled checkbox is selected, click Apply, and then Close.
  3. Start a Load Test run.
  4. Wait until the Load Test run is completed. In the Test Results window, click the Test run completed link.
  5. Verify that there is a file named HelloWorld.txt in the Collected Files section. Clicking the link will open the file in your configured text editor.

Next steps

This was a quick introduction to creating custom Collectors for your Test Projects. Now that you have it working, you can augment it by creating:

  • An installer that copies the DLL file to the PrivateAssemblies\DataCollectors folder
  • A class to hold the Collector’s configuration
  • A visual editor for the Data Collector’s options

I hope you can find a good use for this, and your feedback is greatly appreciated.

In my previous posts I wrote about what the Semantic Web is and how to design your information to be stored in a Semantic Web system. Now it’s time to get the juice out of that data by using the SPARQL query language, which was designed specifically for this purpose.

Thinking in triples

When doing SPARQL queries, you need to think in triples. For example, if you’d like to query for all persons that have a pet whose name is “Duke”, you could write the following query.

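A sketch of what such a query could look like, assuming a hypothetical custom: prefix that maps to http://mySite/MySchema/ (the namespace used later in this post) and that both persons and pets expose a custom:Name property:

PREFIX custom: <http://mySite/MySchema/>

SELECT ?name
WHERE
{
  ?person custom:Name ?name .
  ?person custom:Owner ?pet .
  ?pet custom:Name "Duke" .
}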

You may wonder what the different parts of the SPARQL query are, and how to write a query of your own. We’ll get to that later.

Triple patterns can be chained together to form complex queries. In the first line of the example above, I’m saying “bring me the Name of a person”; when chained to the second pattern, I’m saying “the person ?person should be the owner of a pet” (note: in this context the only usage of custom:Owner is the one I’m talking about; otherwise I’d have to specify that ?pet is of type Pet), so together: “bring me the name of a person that is the owner of a pet”. As you may have already guessed, the third pattern adds “and also, the pet’s name should be ‘Duke’”.

Note: To enhance the potential of triple patterns, SPARQL has the FILTER, OPTIONAL and UNION keywords, which give granular control over which triples are returned.
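
For instance, a sketch that extends the query above with two of these keywords (custom:Age is a hypothetical property): the OPTIONAL group keeps persons whose pets have no recorded age, and the FILTER inside it binds ?age only when the pet is young.

PREFIX custom: <http://mySite/MySchema/>

SELECT ?name ?age
WHERE
{
  ?person custom:Name ?name .
  ?person custom:Owner ?pet .
  OPTIONAL { ?pet custom:Age ?age . FILTER (?age < 5) }
}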

Namespaces and Ontologies


Even if I haven’t written about ontologies yet, you can think of them as namespaces that group similar concepts in a certain context. “Ink” does not mean the same thing in a printer store as it does in the context of a digital tablet. Also, certain assumptions or inferences apply to just one context; following the example, printer ink wears out whereas digital ink does not.

Generally, an ontology maps one-to-one to a namespace.


Namespaces in SPARQL work as a shortcut to avoid typing the full namespace of an element. Typing custom:Name instead of http://mySite/MySchema/Name saves a lot of time and also improves readability. In most Semantic Web software (like IMM), standard namespaces are automatically added by the engine to avoid repeating the prefixes.
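
As a sketch, using the hypothetical custom: prefix from the earlier example, the following two queries are equivalent:

# Fully qualified name:
SELECT ?name WHERE { ?person <http://mySite/MySchema/Name> ?name . }

# With a prefix declaration:
PREFIX custom: <http://mySite/MySchema/>
SELECT ?name WHERE { ?person custom:Name ?name . }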

Query output


In a SELECT query, the variables listed after the SELECT keyword become the output columns. All of the SELECT variables must appear in the WHERE clause, but the opposite is not true: there is no need to list every WHERE variable in the SELECT part.

Note: In SPARQL, there are 4 ways to perform queries: SELECT, CONSTRUCT, ASK and DESCRIBE. SELECT is the most common for simple queries; it returns a table whose fields are the values obtained from matching (binding) the triple patterns in the WHERE clause. CONSTRUCT and DESCRIBE return an RDF graph built from matching the triple patterns in the WHERE clause; CONSTRUCT is conceptually similar to views in a relational database, while DESCRIBE is used to retrieve additional details about the matched triples. Finally, ASK performs boolean queries and works much like the Exists function in relational databases.
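
For example, a sketch of an ASK query against the same hypothetical schema, returning true if anybody owns a pet named “Duke”:

PREFIX custom: <http://mySite/MySchema/>

ASK { ?person custom:Owner ?pet . ?pet custom:Name "Duke" . }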


In my previous article, I wrote about what the Semantic Web is without actually showing how the information is transmitted and stored. This article assumes that you are a technically savvy person who would like to know how to design applications for the Semantic Web.

When you design a system in an Object Oriented way, you think in entities within a domain model. These business objects are usually mapped to Database rows, which are stored in tables. These database tables have a rigid schema, which is very complicated to update and maintain.


What happens if I want to create a new version of the system whose business entities store different information? I would have to create a migration script, which would make the database unavailable for some time, and for most large databases the downtime is long enough to make the migration unaffordable.

Message-oriented middleware (MOM) databases have an interesting approach that tackles this issue. Each business object is part of a message that is exchanged between services. MOM databases store the entire message in one special field (column) within a data table (usually of an XML type), allowing them to evolve the schema of their message entities while keeping the DB schema unchanged (so there is no DB downtime). Each entity is carefully designed to use simple data types and to have a unique ID (so it is uniquely identifiable). Having a standard representation also eases the migration path when the business objects have their schema partially modified (an old application can just have its mapping information updated, with no need to update the DB schema).

Designing Entities

If you design your business data storage along the same lines as MOMs, your entities will look like the following:


  • Every entity will have a unique identifier that becomes its identity within the system (and probably with external systems), and will be of a certain type.
  • Every entity will have its data and relationships, which can be drawn as a directed graph.
  • Every piece of data will have its type information embedded (so a person’s photo can be interpreted as a bitmap).

Following these principles will lead you to entities that are easy to represent and transmit in standard formats such as RDF or plain XML. Designing entities with unique identifiers also helps when synchronizing information related to a single entity.

Representing entities in RDF

The Semantic Web standardizes the way each entity is transmitted to and from (and imported/exported between) Semantic Web systems. The entity we described in the preceding section can be represented in RDF as shown below.

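A sketch of what this could look like in RDF/XML, using the facts listed further below (the people: namespace URI is hypothetical; only the rdf: namespace is standard):

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:people="http://people/schema#">
  <!-- The tag name carries the type (Person); rdf:about carries the unique identifier. -->
  <people:Person rdf:about="http://people/johnDoe">
    <!-- Attributes and relationships are stored as child tags. -->
    <people:Name>John Doe</people:Name>
  </people:Person>
</rdf:RDF>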

Evidently, most RDF entities will be much more complex than this example, yet it is useful for noting that:

  • The type information is implicit when using a custom namespace and Tag name
  • The unique identifier is standardized within RDF
  • Attributes and relationships are stored as child tags

What is the logic behind RDF?

The sample entity is actually a collection of facts about the entity (representation). The facts are:

  • Entity http://people/johnDoe is of type Person
  • http://people/johnDoe has a Name of "John Doe" (string literal)

Semantic Web applications combine these facts with logic to answer more complex queries (for example, querying for the list of HumanBeings (a synonym of Person) should return John Doe in the results).


The directed graph and the collection of facts are different views of the same information. Every piece of information is stored as [Entity] [Relationship] [Value], which is the same as [Subject] [Predicate] [Object]. This is also known as a triple pattern in SPARQL.

In natural language, you could write [Orson Welles] [was born on] [May 6, 1915]. It is the same when relating 2 entities: [Orson Welles] [directed] [Citizen Kane] *. The entities don’t have to be of the same type (in this example I’m relating a Person to a Movie).

(*) A better design would be [Citizen Kane] [was directed by] [Orson Welles], but let’s put that discussion off for now.

Can’t I just design a database this way?

Simplifying the model, you could have a table with "Subject", "Predicate" and "Object" columns. The first 2 columns could be unique identifiers (a relationship must also be identifiable), while the last column should be a special field that can store either a reference to another entity (URI), a literal value (string, date/time, decimal, etc.) or a blob.
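
To make the idea concrete, a minimal sketch of how such a triple could be modeled in code (hypothetical types; the three kinds of Object values are collapsed into one field, as described above):

using System;

public class Triple
{
    public Uri Subject { get; set; }     // unique identifier of the entity
    public Uri Predicate { get; set; }   // unique identifier of the relationship

    // Holds either another entity's Uri, a literal value (string,
    // DateTime, decimal, etc.) or a byte[] blob.
    public object Object { get; set; }
}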

For simple facts and queries this approach may work. It could even be the starting point for a Semantic Web system. Things get more complicated when adding validations (you can’t have more than 1 birthday, but you can have more than 1 name) or inferences that generate new information ("A isMotherOf B" whenever "B is child of A andAlso A is a woman"). This is why Semantic Engines are built: they hide the complexity of storage and provide an interface to access the semantic information.

Products like IMM work with a backend database, and they use an intermediate component to encapsulate the process of retrieving and storing entities, and performing queries.

What’s next?

I’m planning to write an article about SPARQL (the Semantic Web querying language, standardized by the W3C) and ontologies, which give a context (and a meaning within the context) to the Semantic Web information, setting a common language.

Many people say that Web 3.0, the “Semantic Web”, is coming. But what is the Semantic Web all about?

In the beginning, the Internet was created as a way to share information between people in different organizations, like universities and research centers. In the early 90’s the Internet became publicly available and grew exponentially, and so did the information shared by people. An automated way to process the ever-increasing amount of information was needed; nowadays, it is not enough to use a keyword-based Web Search Engine to find information on the web. This is where the Semantic Web comes in.

The Semantic Web represents the collection of standards created to share information not only between humans but also between machines. It starts by defining a common language (XML) and a representation for the shared data (RDF). But the Semantic Web requires more than a common language and representation.


Do computers “understand” RDF?

In order for machines to be able to process the information, they must not only be able to read it but also to “understand” it. I’m referring not only to understanding that “<sw:Developer>” is an XML tag in a namespace, but also that Developer represents an entity within a context (Ontology).

In order to retrieve information that is stored in RDF (XML) under a specific Ontology (OWL), a query language is needed. That is why SPARQL was created; it also provides basic inferences. For example, every “Developer” is a “Southy” (within the company context ;) ), so you will probably want to query for “all Southies” instead of Developers. A programmer can then create the SPARQL queries that systems use to retrieve this interchangeable information.

 


Queries in the Semantic Web

So what does a SPARQL query look like?

In the example below, I want to answer the following question: “How can we choose 2 people who have worked together, where both have a high English level and at least 1 has management skills?”

PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX sw: <http://southworks.net/core/testOntology>

SELECT ?x ?y
WHERE
{
  ?x sw:workedWith ?y ;
     sw:hasMgmtSkills "true"^^xsd:boolean ;
     sw:englishLevel "high" .
  ?y sw:englishLevel "high" .
}

This is a complex query where I’m querying not only for facts but also for information that is inferred from these facts.

Note: For an interesting example of Advanced SPARQL queries in IMM, please see http://blogs.msdn.com/imm/archive/2008/10/28/advanced-sparql-in-imm.aspx

Evolving the Semantic Web

You don’t need to be too sceptical to see that humans still need to write the queries, and there is still no released commercial system that answers natural-language questions by translating them into SPARQL queries (products like Cypher are still in beta). What the future brings on this front may depend on the evolution of these standards and of massively parallel computing.

I’ve left out an important detail, though: the information made available is naively trusted. When these systems publish and consume information on the Internet, there must be a way to prove that the information is true and to establish a trust relationship with the publisher (as we humans do). So far there are no implementations of an actual trust network, but plans for a public network based on a Public Key Infrastructure may become pillars of a future Trusted Semantic Web.

Next articles

I’m planning to write specific articles on:

  • Real world applications, like IMM
  • Semantic engine internals
  • SPARQL

Stay tuned!

During the last several months we’ve been helping David Aiken, James Conard and the rest of the DPE team push this project forward to completion.

The project contains:

  • A set of 20+ labs that can be used to learn and practice several technologies, such as WCF, WF, CardSpace, ASP.NET AJAX, VSTO (Word, Excel, Outlook and SharePoint), and Silverlight
  • A set of 30+ decks that show each technology’s key concepts
  • 14 demo scripts that were used to present the technologies to customers and the community

All of the content can be easily navigated through HTML Web pages.

 

We think this is very important .NET learning material for the community, which Microsoft has made available for download.

Don’t miss David’s original post.

Problem:

In Windows Server 2008 RC0 with .NET 3.5, JSON services do not work; an exception message is returned. If the behavior is enabled, a “Method not found” exception is returned.

Cause:

Windows Server 2008 RC0 ships with an incorrect version of the System.Runtime.Serialization.dll file. In that DLL, the DataContract class does not contain a GetIdForInitialization method (which is invoked from System.ServiceModel.Web). The Windows Vista version does contain that method.

Workaround:

Use the System.Runtime.Serialization.dll file from the Windows Vista version.

  • Copy the file to the user’s desktop
  • Open a Visual Studio 2008 Command Prompt as administrator
  • Go to the C:\Users\username\Desktop folder
  • Run gacutil /i System.Runtime.Serialization.dll /f
  • Run iisreset (to ensure the runtime is unloaded)

If you are creating XML documents or fragments from a String (for example, when using a template engine to create XML fragments), you may have wondered whether there is a .NET method that escapes the string.

For those cases, this code snippet may come in handy:

string escaped = SecurityElement.Escape("This <fragment> is being escaped!");
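
A minimal, self-contained sketch showing the escaped output (SecurityElement lives in the System.Security namespace; the <fragment> placeholder stands in for whatever markup you need escaped):

using System;
using System.Security;

class EscapeExample
{
    static void Main()
    {
        // Escape replaces the invalid XML characters <, >, ", ' and &.
        string escaped = SecurityElement.Escape("This <fragment> is being escaped!");
        Console.WriteLine(escaped); // prints: This &lt;fragment&gt; is being escaped!
    }
}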

You can obtain more information about this method in the MSDN documentation for SecurityElement.Escape.

 

Cheers!

Gabriel

Hi Folks!

This is a project I was involved in, helping DPE, so I'm proud to announce that it was just released to the public.

This Developer Training Kit is composed of 7 labs:

  • Introduction to Windows Communication Foundation
  • Integrating CardSpace into Web Sites
  • Introduction to Windows Workflow Foundation
  • Using Windows Eventing
  • Extending Windows PowerShell and the Microsoft Management Console
  • Extending IIS 7.0 with Custom Handlers
  • Using Transactional NTFS (TxF)

For more information, please read James' original post: http://blogs.msdn.com/jamescon/archive/2007/07/17/just-released-windows-server-2008-developer-training-kit-beta-3.aspx

The download is available at: http://www.microsoft.com/downloads/details.aspx?FamilyId=B36EE81A-AFF5-4314-95D7-DAD3ACFA8094&displaylang=en

Cheers!

Gabriel