
TableServiceContext query Uri is different when generated from Web Role in Dev Fabric vs. Azure

We have a LINQ query on the PartitionKey and RowKey against an Azure table storage instance that is generated differently when the web role is running in the dev fabric versus the Azure fabric. The query URI (obtained by calling ToString() on the IQueryable produced by the LINQ query) is generated as a $filter query when the web role runs on a dev machine, but not when it runs in the fabric.

Dev Fabric Uri:

https://foo.table.core.windows.net/Bars()?$filter=(RowKey eq 'a1@gmail.com') and (PartitionKey eq '41e0c1ae-e74d-458e-8a93-d2972d9ea53c')

Azure Fabric Uri:


I believe this is causing the well-known ResourceNotFound errors when methods like FirstOrDefault are called. We know the ways to handle this.

My question is: why is the URI generated differently in a dev web role versus an Azure web role?

Here is some code similar to what was used to generate the URIs in both cases.

TableServiceContext context = new TableServiceContext();

var qry = context.Somethings.Where(m => m.RowKey == Username && m.PartitionKey == ProjectID);

if (qry.FirstOrDefault() == null) {
  // ^ This statement throws an error when the web role is running
  // in the Azure fabric
}

2 Answers Found


Answer 1

I think this is the same question I answered a bit ago on StackOverflow.  Check the answer there (http://stackoverflow.com/questions/3340448/determine-request-uri-from-wcf-data-services-linq-query-for-firstordefault-agains/3340755#3340755), copied here for everyone else's benefit:


See http://blogs.msdn.com/b/windowsazurestorage/archive/2010/07/26/how-wcf-data-service-changes-in-os-1-4-affects-windows-azure-table-clients.aspx.

Specifically, it used to be the case (in previous Guest OS builds) that writing the query as you did (with the RowKey predicate before the PartitionKey predicate) resulted in a $filter query, while the reverse order (PartitionKey preceding RowKey) resulted in the kind of point query that raises an exception when the result set is empty.

I think the right fix for you (as indicated in the blog post above) is to set IgnoreResourceNotFoundException to true on your context.
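A minimal sketch combining both points from this answer -- note that Somethings, Username, and ProjectID are the names from the question above, and the constructor arguments are placeholders, so treat this as illustrative rather than exact code:

```csharp
// Sketch only: baseAddress/credentials are placeholders, and
// Somethings/Username/ProjectID come from the question above.
TableServiceContext context = new TableServiceContext(baseAddress, credentials);

// Return null instead of throwing when a point query finds no entity.
context.IgnoreResourceNotFoundException = true;

// Putting the PartitionKey predicate first makes the query a point lookup
// (e.g. Bars(PartitionKey='...',RowKey='...')) rather than a $filter scan.
var qry = context.Somethings
                 .Where(m => m.PartitionKey == ProjectID && m.RowKey == Username);

var result = qry.FirstOrDefault(); // null when not found, no exception
```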


Answer 2

Thanks Steve. It was the same question (tweaked for here) and you provided the right answer. I have also updated my original question on Stack Overflow to include the findings: http://stackoverflow.com/questions/3340448/determine-request-uri-from-wcf-data-services-linq-query-for-firstordefault-agains .


I'm running the July CTP on 64-bit Server 2008. With either VS2008 SP1 or VS2010 Beta 1, I get similar problems. When I create a new cloud project, I try to run it without making any changes. Eventually the fabric times out and dies.

I can create and run web projects -- they're set to run against the dev server, not IIS.

IIS 7 is installed, ASP.NET is installed, and WCF HTTP activation is enabled.

What I've tried:
1. Uninstalling and reinstalling the tools and SDK.
2. Setting the web role to run in full trust.
3. Removing all unused references.
4. My user name does NOT have a space in it.

Some additional information:
1. Local table storage isn't spinning up (a port conflict, I guess). I've fixed this issue before on a July CTP machine (Vista 32-bit), and it didn't help.
2. Whenever I try to run, I get a security log failure against the role host (Caller Process Name: C:\Program Files\Windows Azure SDK\v1.0\bin\devfabric\RdRoleHost.exe)

I'm running as admin.

Any ideas how to fix this?


Hi folks.

For some time there has been an issue where the Azure dev fabric repeatedly crashes (dfloadbalancer) when using DotNetOpenAuth for OpenID authentication. Although everything works in production, local development requires some workarounds.


Stack Overflow contains a more detailed description of the problem (with the stack trace in Microsoft.ServiceHosting.Tools.DevelopmentFabric.LoadBalancer). Recently another similar problem (RedirectingResponse.AsActionResult()) was reported, with a workaround by Michael Gorsuch.

Does this help to identify, reproduce and resolve the issue with Azure DevFabric LoadBalancer? Is there any progress?


See the code below. It works perfectly when I develop on my local machine; however, when I try to deploy it to the cloud, the status of my worker role keeps cycling between "initializing", "busy", "stopping", and "stopped", and then starts from "initializing" again.

Does anyone know what happened?

using (ServiceHost host = new ServiceHost(typeof(GraphicProcessServiceImpl)))
{
    string domain = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["tcpinput"].IPEndpoint.Address.ToString();
    int tcpport = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["tcpinput"].IPEndpoint.Port;
    int mexport = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["mexinput"].IPEndpoint.Port;

    // Add a metadata behavior for client proxy generation
    ServiceMetadataBehavior metadatabehavior = new ServiceMetadataBehavior();
    host.Description.Behaviors.Add(metadatabehavior); // added: the behavior must be attached to the host
    Binding mexBinding = MetadataExchangeBindings.CreateMexHttpBinding();
    string mexlistenurl = string.Format("http://{0}:{1}/MyService", domain, mexport);
    string mexendpointurl = string.Format("http://{0}:{1}/MyService", domain, 8001);
    host.AddServiceEndpoint(typeof(IMetadataExchange), mexBinding, mexendpointurl, new Uri(mexlistenurl));

    Trace.TraceInformation("\n {0} \n {1}", mexlistenurl, mexendpointurl);

    // Add the endpoint for the service
    string listenurl = string.Format("http://{0}:{1}/MedGrap", domain, tcpport);
    string endpointurl = string.Format("http://{0}:{1}/MedGrap", domain, 9001);
    BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.None);
    binding.MaxBufferSize = int.MaxValue;
    binding.MaxReceivedMessageSize = int.MaxValue;
    binding.TransferMode = TransferMode.Buffered;
    binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;
    binding.ReaderQuotas.MaxArrayLength = int.MaxValue;
    binding.SendTimeout = TimeSpan.FromHours(1);
    binding.ReceiveTimeout = TimeSpan.FromHours(1);
    binding.MessageEncoding = WSMessageEncoding.Mtom;
    host.AddServiceEndpoint(typeof(IGraphicProcessService), binding, endpointurl, new Uri(listenurl));

    Trace.TraceInformation("\n {0} \n {1}", listenurl, endpointurl);

    host.Open();

    while (true)
    {
        Thread.Sleep(TimeSpan.FromSeconds(30)); // keep the worker role alive
    }
}
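One thing worth double-checking (this is an assumption on my part, since the ServiceDefinition.csdef isn't shown): the code above reads the endpoint named "tcpinput" but builds http:// listen addresses for a BasicHttpBinding. In the cloud, the endpoint's declared protocol has to match what the host actually binds, or the role can fail on startup and recycle exactly as described. A hypothetical fragment with matching declarations might look like:

```xml
<!-- Hypothetical ServiceDefinition.csdef fragment: endpoint names and
     ports are taken from the code above; the protocol must match the
     binding actually used (http for BasicHttpBinding). -->
<WorkerRole name="GraphicProcessWorker">
  <Endpoints>
    <InputEndpoint name="tcpinput" protocol="http" port="9001" />
    <InputEndpoint name="mexinput" protocol="http" port="8001" />
  </Endpoints>
</WorkerRole>
```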


I have a RIA application that is working well locally in the development fabric. It has an authentication database and another database, both on SQL Azure. I've just migrated my Web Role to the cloud, and the connectivity to both databases seems to be lost. Login times out, and when I look at the exception with Fiddler it says "Unable to connect to SQL Server database".

Both databases are within the same server, and the server firewall setting allows both my development machine and Microsoft Services to reach the database.

The Web Role is within an Affinity group and I am quite sure that the SQL server is in the same affinity group although I can't figure out how to verify through the Portal UI.

It seems like I am missing something really simple...

Don Rule


Hi there; I am very new to the Windows Azure platform and I started learning it last week. I have 2 questions:

1) What is the exact difference between Windows Azure and the Azure AppFabric?

2) Does Microsoft provide any free Windows Azure accounts for the developers ? If so please provide me the relevant resources.


Thanks in advance


I am facing an issue with my development fabric. When I start my cloud service on the dev fabric, both the web and worker roles start up neatly. However, when I access my web role through the browser, I always get an error that the load balancer has stopped.

Please note that my web role works absolutely fine in the staging environment. I am not able to debug my app in the local environment. Any idea why the dev fabric is behaving so weird?

Hello All,

I have been doing some testing with my application, but now the Development Storage has started to creak under load.

I have lots of blobs (tens of thousands), containers (tens of thousands), and a huge Azure table (in the dev fabric); all was working well, although it takes 9 GB of RAM.

Testing is completed, so I have tried to reset the storage, but I guess I have just put too much data in it, as I get a timeout from the Development Storage saying the reset could not be completed in time.

So my question is: how can I delete/reset my dev fabric storage?

Database hack or reinstall?


Hi All,

I am trying to use VSTS 2010 features to Build, Deploy and Test Azure packages onto local dev fabric.

During the deployment process (using a variation of LabDefaultTemplate) - I am trying to deploy the csx folder along with the ServiceConfiguration.cscfg file using the csrun.exe onto the dev fabric of a TestLab - VM machine.

The csrun.exe fails with the following error during the deployment process:

Encountered an unexpected error Access is denied at Microsoft.ServiceHosting.Tools.Utility.NativeMethods.StartWithBreakAway(ProcessStartInfo startInfo)
at Microsoft.ServiceHosting.Tools.Utility.ProcessWrapper.EnsureStarted(Boolean ensureVisable)
at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.DevFabric(DFCommands acts)

at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.ParseArguments(String[] args, Boolean doActions)
at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.ExecuteActions(String[] args).

Unhandled Exception: System.ComponentModel.Win32Exception: Access is denied
at Microsoft.ServiceHosting.Tools.Utility.NativeMethods.StartWithBreakAway(ProcessStartInfo startInfo)
at Microsoft.ServiceHosting.Tools.Utility.ProcessWrapper.EnsureStarted(Boolean ensureVisable)
at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.DevFabric(DFCommands acts)
at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.ParseArguments(String[] args, Boolean doActions)
at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.ExecuteActions(String[] args)
at Microsoft.ServiceHosting.Tools.CloudServiceRun.Program.Main(String[] args)

Any thoughts on how to fix this issue?


My WorkerRole seems to run just fine locally. However, it won't run on Azure itself -- the role just keeps coming up as 'Busy' and never reaches the ready/running stage.

The associated web role works perfectly.

Could this just be a minor config bug, or something else?

I've gone through and checked out http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/6c739db9-4f6a-4203-bbec-eba70733ec16 . I've managed to get IntelliTrace working, and I'm getting the following error (it is the first of a list, so I assume it's having knock-on effects): Call Stack Location: WaWorkerHost.exe!Microsoft.WindowsAzure.Hosts.Worker.Loader.DebuggerAttach() -- IntelliTrace time context -- Thrown: "No handle of the given name exists." (System.Threading.WaitHandleCannotBeOpenedException). I'm at a loss as to what is actually causing this.

In addition, I'm getting a System.IO.FileNotFoundException for the associated web role, even though the web role runs fine in the cloud and its 'Copy Local' attribute IS set to true.


Not exactly sure what is going on here or how to even BEGIN to fix it -- really at a loss here!




I see that Guest OS 1.4 finally addresses the issue of reserved characters not being encoded in a table storage query (see my previous post) -- however, the issue does not appear to be fixed in the development fabric.

I tried upgrading to the June SDK 1.2, and I have updated all my references to the storage client in this SDK - but I still get "Invalid Input" errors.

How can I replicate the correct behaviour in the dev fabric?




I have been struggling a bit with running a web role that uses unmanaged dlls.

In the ordinary ASP.NET web project deployment, the unmanaged dlls are found using the path specified in the PATH environment variable.

However, if the web project is deployed to the development fabric, the unmanaged dlls are not found.

I have tried to set the PATH environment variable to the bin directory of my package, but this did not help.

Any help is much appreciated.

Thanks in advance.




Has anyone got any experience using fuslogvw.exe while running an app in the Dev Fabric?

I want to analyze assembly bindings performed by the CLR but fuslogvw.exe seems to work only sporadically if my app runs in the Dev Fabric.

However, fuslogvw.exe always works fine if the app runs locally without the Dev Fabric.

Your help is much appreciated.




I am trying to load test my Java web application, which uses the Azure Storage API, on the dev fabric. I use JMeter to do this. I am working on a Windows 7 machine.

I get the exceptions below when I load test:

org.soyatec.windows.azure.error.StorageServerException: Server encountered an internal error. Please try again after some time.
<?xml version="1.0" encoding="utf-8"?><Error><Code>InternalError</Code><Message>Server encountered an internal error. Please try again after some time.


Could anyone please suggest what the problem is? Also, what I notice is that the above error does not occur if the number of concurrent threads is small, e.g. 2-3. As I increase the number of concurrent threads, I get the error.

I have a question about pointing a domain to an Azure web role URL. I updated my domain's CNAME configuration for 'www' to point to the Azure web role URL, so now if I go to www.MyDomain.com it works, but not MyDomain.com. Is there any way to overcome this limitation?

We are creating a diagnostics client to monitor the performance of a Web Role that is a WCF web service. We enabled WCF diagnostics in web.config and added a few of the ServiceModelService counters. We are able to see the values being logged to the Azure table when running in the dev fabric, but when we publish to Azure we are not able to capture any of the WCF counters. We do get other counters, like Processor Time. Is there any other specific configuration required to get data for these counters?



Is there a way to enforce 128 bits as the minimum SSL encryption level? This is possible with IIS (http://support.microsoft.com/kb/245030), but I'm not sure whether these knobs are exposed in an Azure Web Role. This is an important requirement because variable-rate encryption, wherein the browser chooses the encryption level, can result in 40-bit encryption, which is unsuitable for sensitive data.



I want to display some charts in the web role of my Azure application, and I am trying to use the ASP.NET charting control for this. The chart renders properly in the development fabric, but when I deploy the application to the Azure fabric I get a
"System.OutOfMemoryException: Out of memory" exception
on the line chart.Titles.Add(t);
Is the ASP.NET charting control not supported in the Azure fabric? If not, are there any other charting APIs that I can use?


Hello Guys,

I have developed an ASP.NET web role application using VS 2010. I want to upload an image into Azure storage, so I created a Storage Account in the Windows Azure developer portal and wrote the code for this in my ASP.NET application. But when I run the application in VS 2010, to test whether it works before hosting it in Windows Azure, it gives a connection problem.

I am not sure about my connection setting. Which setting should I select ("Use development storage" or "Enter storage credentials") when I run my application in Visual Studio? Currently I have selected "Enter storage credentials" and filled in all the required information: endpoint protocols, storage account name, and account key.

Beyond this, do I need to make any more changes in WebRole.cs, web.config, or the WebRole settings?
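For reference, the setting named "BlobConnectionString" used in the code below is normally supplied through ServiceConfiguration.cscfg (and declared in ServiceDefinition.csdef). A hedged example of the two forms of the value -- the account name and key here are placeholders, not real credentials:

```xml
<!-- Local testing against development storage: -->
<Setting name="BlobConnectionString" value="UseDevelopmentStorage=true" />

<!-- Against a real storage account (placeholder values): -->
<Setting name="BlobConnectionString"
         value="DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey" />
```

Switching between the two values lets the same code run in the dev fabric and in the cloud without changes.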

Just look at my code which I have mentioned below.

Default.aspx.cs code




namespace WebRoleImage
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            CloudStorageAccount objStorage = CloudStorageAccount.FromConfigurationSetting("BlobConnectionString");
            CloudBlobClient objClient = new CloudBlobClient(objStorage.BlobEndpoint, objStorage.Credentials);
            CloudBlobContainer objContainer = objClient.GetContainerReference("mycontainer");
            objContainer.CreateIfNotExist();
        }

        protected void UploadImageToAzure_Click(object sender, EventArgs e)
        {
            if (bwsImage.HasFile)
            {
                String fileName = bwsImage.FileName;
                CloudStorageAccount objStorage = CloudStorageAccount.FromConfigurationSetting("BlobConnectionString");
                CloudBlobClient objClient = new CloudBlobClient(objStorage.BlobEndpoint, objStorage.Credentials);
                CloudBlobContainer objContainer = objClient.GetContainerReference("mycontainer");
                CloudBlob obj = objContainer.GetBlobReference(fileName);
                obj.Metadata["MetaName"] = "meta";

                BlobStream blobstream = obj.OpenWrite();
                blobstream.Write(bwsImage.FileBytes, 0, bwsImage.FileBytes.Count());
                blobstream.Close(); // added: the stream must be closed to commit the blob

                IEnumerable<IListBlobItem> objBlobList = objContainer.ListBlobs();
                foreach (IListBlobItem objItem in objBlobList)
                {
                    Response.Write(objItem.Uri + "<br>");
                }
            }
        }
    }
}


WebRole.cs Code



namespace WebRoleImage
{
    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            DiagnosticMonitor.Start("DiagnosticsConnectionString");

            CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
            {
                configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
                RoleEnvironment.Changed += (sender, arg) =>
                {
                    if (arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
                        .Any((change) => (change.ConfigurationSettingName == configName)))
                    {
                        if (!configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)))
                        {
                            RoleEnvironment.RequestRecycle();
                        }
                    }
                };
            });

            RoleEnvironment.Changing += RoleEnvironmentChanging;
            return base.OnStart();
        }

        private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
        {
            if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
            {
                e.Cancel = true;
            }
        }
    }
}

Am I doing anything wrong?

Please look into this issue and get back to me. This is very urgent.

Thanks in Advance.


Chiranjibi Das







I'm trying to run an existing ASP.NET MVC 2 application in the development fabric (1.2) using VS2010 on Windows 7. I'm following the Windows Azure Platform Training Kit Hands-On Labs (June 2010 update). When hitting F5, I see a blank screen.

I also tried to create a completely new solution, starting with a new cloud service and adding a new MVC web role. This works, but only when the MVC web role targets .NET Framework 3.5. After changing this to .NET Framework 4.0, a blank screen appears again.

Did anyone have the same issues?



We want to use the Service Bus to address the following scenario: a client C invokes a WCF service hosted in a Web Role W; in reaction to this interaction, the WCF service in W invokes, through the AppFabric Service Bus, another WCF service R hosted by a computer behind a firewall. For the communication between the Web Role W and the service R we must use one of the bindings based upon HTTP, as firewall rules dictate.

We tried all the provided bindings -- wsHttp2007RelayBinding, basicHttpRelayBinding, and webHttpRelayBinding -- and we noticed that, while using the development fabric we have no problems, once we deploy the Web Role W on Azure the communication does not always work reliably as expected. In particular:

1. As reported in the release notes of the Nov 2009 CTP, wsHttp2007RelayBinding often times out (this happens also in the dev fabric).
2. basicHttpRelayBinding sometimes throws an exception complaining about a connection with KeepAlive=true being closed by the remote party.
3. webHttpRelayBinding throws an exception that says "The message has an unexpected format other than SOAP and Http."

On the contrary, we have no problem if we use netTcpRelayBinding. As we already stated, we cannot use netTcpRelayBinding but must opt for HTTP bindings.

In conclusion, our question is: can we depend on these HTTP bindings -- that is, is it worth investigating and resolving the problems we have -- or is only netTcpRelayBinding supported, and we should give up?

