David Catuhe – Use Kinect As Mouse

Kinect Keyboard Simulator & Kinect Sabre for Kinect For Windows SDK 1.0

Following the official release of Kinect for Windows SDK 1.0 and Kinect Toolbox, I'm pleased to share with you two (useful?) samples I wrote:

https://www.catuhe.com/msdn/kinecttools.zip

Kinect Keyboard Simulator

This tool allows you to send keys to a specified application when gestures are detected. An obvious usage is to change slides in PowerPoint when you swipe left or right.
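To illustrate the idea (this is not the actual Kinect Toolbox API, just a hypothetical sketch), mapping detected gestures to keystrokes can be as simple as a lookup table consulted whenever a gesture event fires:

```python
# Hypothetical gesture names and key names, for illustration only.
GESTURE_KEYS = {
    "SwipeToLeft": "PageDown",   # next slide
    "SwipeToRight": "PageUp",    # previous slide
}

def key_for_gesture(gesture):
    """Return the key to send to the target application when a gesture
    is detected, or None when the gesture should be ignored."""
    return GESTURE_KEYS.get(gesture)
```

The real tool additionally targets a specific application window, but the core of the mapping is just this dictionary lookup.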

Kinect Sabre

Kinect Sabre is THE mandatory tool for your Kinect!! It creates an augmented reality view of yourself with a LIGHT SABER in your left hand!!! (Warning: you need to install XNA Game Studio 4.0 to use Kinect Sabre)

Hope you like these tools!

MishraReader 1.0.2.0 (beta 2) is out!

After several months of coding, we are proud to announce the beta 2 of MishraReader. This version is a full rewrite of the beta 1 to adopt the MVVM and dependency injection design patterns.

But the major feature of this version is the redesign of the user interface to follow Metro guidelines:

You can grab it for free here (MishraReader is available in French and English):

https://mishrareader.codeplex.com

To use it, just follow these instructions:

Connection

On this screen, you only have to type your Google Reader account information and click on [SIGN IN]:

Using MishraReader

Once connected, you can see unread/starred or all posts:

Using the [SHOW] dropdown menu, you can select only the subscription you want to read:

When you are reading a post, you can:

Configuring MishraReader

Using the settings menu, you can configure different options (don't forget to click on [SAVE CHANGES]):

Account

In the account screen, you can disconnect the current account and decide if you want to automatically mark items as read when you select them:

Sharing Services & Bookmark Services

With this screen, you can configure sharing and bookmark services.

Display

MishraReader can use 8 different accent colors that you can select using the Display screen.

You can also choose to:

  • Show only the post summary (instead of a web view of the full post), which is quicker to load
  • Use a notification icon and, in this case, show or hide the main window in the taskbar

Network

The network screen allows you to select:

  • the number of items downloaded per request (between 10 and 500)
  • the automatic refresh interval (between 1 minute and 1 hour)

Conclusion

I hope you will like this version, as we worked hard to make it the best feed reader available!

Do not hesitate to give us your feedback on the https://mishrareader.codeplex.com site.

Official Kinect for Windows SDK and Kinect Toolbox 1.1.1 are out!

purchase-hero

The official Kinect for Windows SDK is out and you can grab it here:

https://www.microsoft.com/en-us/kinectforwindows/develop/overview.aspx

The key points are:

  • As long as you use a Kinect for Windows sensor (not the Xbox 360 one) and the official SDK, you can develop commercial applications using Kinect technologies.
  • New near mode for depth values (no skeleton tracking in this version) which enables the depth camera to see objects as close as 40 centimeters
  • Up to 4 sensors can be connected to the same computer

Alongside the SDK, a new sensor is available. If you want to buy it ($249.99), you can go there:

https://www.microsoft.com/en-us/kinectforwindows/purchase/

Of course, Kinect Toolbox 1.1.1 is also out and supports the final version of Kinect for Windows SDK:

https://kinecttoolbox.codeplex.com/

The NuGet package can be found there:

https://nuget.org/List/Packages/KinectToolbox

Use the power of Azure to create your own raytracer

6d594911-ad38-43af-ae46-2df304f020bb[1]

The power available in the cloud is growing every day, so I decided to use this raw CPU power to write a small raytracer.

I'm certainly not the first one to have had this idea, as for example Pixar or GreenButton already use Azure to render pictures.

In this article, we will see how to write our own rendering system using Azure in order to produce your own 3D rendered picture.

The article will be organized around the following axes:

  1. Prerequisites
  2. Architecture
  3. Deploying to Azure
  4. Defining a scene
  5. Web server and worker roles
  6. How it works
  7. JavaScript client
  8. Conclusion
  9. To get further

The complete solution can be downloaded here, and if you want to see the final result, please go there: https://azureraytracer.cloudapp.net/

image94

You can use a default scene or create your own scene definition (we will see later how to do that).

The rendered pictures are limited to a 512×512 resolution (you can of course change this setting).

Prerequisites

To be able to use the project, you must have:

  • a Visual Studio 2010 version (Express version is supported): https://www.microsoft.com/visualstudio/en-us/products/2010-editions
  • Windows Azure SDK: https://www.windowsazure.com/en-us/develop/downloads/

You will also need an Azure account. You can get a free one there: https://www.windowsazure.com/en-us/pricing/free-trial/

Architecture

Our architecture can be defined using the following schema:

image_thumb49

The client connects to a web server composed of one or more web roles (in my example, there are two web roles). The web roles provide the web pages and a web service used to get the status of a request. When a user wants to render a picture, the associated web role writes a render message to an Azure queue. A farm of worker roles reads the same queue and processes any incoming render message. Azure queues are transactional and atomic, so only one worker role will grab the order: the first available worker reads and removes the message. As queues are transactional, if a worker role crashes, the render message is reintroduced in order to avoid losing your work.

In our sample, I decided to use a semaphore to limit the maximum number of requests executed concurrently. Indeed, I prefer not to overload my workers, in order to give maximum CPU power to each render task.

Deploying to Azure

After opening the solution, you will be able to launch it directly from Visual Studio inside the Azure Emulator. You will then be able to debug and fine-tune your code before sending it to production.

Once you're ready, you can deploy your package to your Azure account using the following procedure:

  • Open the "AzureRaytracer.sln" solution within Visual Studio
  • Configure your Azure account: to do so, right-click on the "AzureRaytracer" project and choose the "Publish" menu. You will get the following screen:

image_thumb11

  • Using this screen, please choose the "Sign in to download credentials" option, which will let you download an automatic configuration file for your Azure account:

image_thumb13

  • Once the file is downloaded, we will import it:

image_thumb15

  • After importing the data, Visual Studio will ask you for a name for the service:

image_thumb50

  • The next screen will present a summary of all selected options:

image_thumb19

  • Before publishing, we must modify some parameters to get our package ready for production. First of all, we have to go to the Azure portal: https://windows.azure.com. Go to the storage accounts tab to grab the required information:

image_thumb22

  • On the right pane, you can get the primary access key:

image_thumb51

  • With this information, you can go back to your project:

image_thumb26

  • On every role, you have to go to the settings menu in order to define the Azure connection string (you will use here the information grabbed from the Azure portal):

image_thumb29

  • You must change the "AzureStorage" value using the "…" button:

image_thumb31

  • In the Configuration tab, you can change the instance count for each role:

image_thumb33

image_thumb35

  • For more information about instance sizes, see: https://msdn.microsoft.com/en-US/library/ee814754.aspx
  • Finally, you will be able to publish your package:

image_thumb37

Your raytracer is now ONLINE!!! We will now see how to use it.

Defining a scene

To define a scene, you have to specify it using an XML file. Here is a sample scene:






<?xml version="1.0" encoding="utf-8" ?>
<scene FogStart="5" FogEnd="20" FogColor="0, 0, 0" ClearColor="0, 0, 0" AmbientColor="0.1, 0.1, 0.1">
  <objects>
    <sphere Name="Red Sphere" Center="0, 1, 0" Radius="1">
      <defaultShader Diffuse="1, 0, 0" Specular="1, 1, 1" ReflectionLevel="0.6"/>
    </sphere>
    <sphere Name="Transparent Sphere" Center="-3, 0.5, 1.5" Radius="0.5">
      <defaultShader Diffuse="0, 0, 1" Specular="1, 1, 1" OpacityLevel="0.4" RefractionIndex="2.8"/>
    </sphere>
    <sphere Name="Green Sphere" Center="-3, 2, 4" Radius="1">
      <defaultShader Diffuse="0, 1, 0" Specular="1, 1, 1" ReflectionLevel="0.6" SpecularPower="10"/>
    </sphere>
    <sphere Name="Yellow Sphere" Center="-0.5, 0.3, -2" Radius="0.3">
      <defaultShader Diffuse="1, 1, 0" Specular="1, 1, 1" Emissive="0.3, 0.3, 0.3" ReflectionLevel="0.6"/>
    </sphere>
    <sphere Name="Orange Sphere" Center="1.5, 2, -1" Radius="0.5">
      <defaultShader Diffuse="1, 0.5, 0" Specular="1, 1, 1" ReflectionLevel="0.6"/>
    </sphere>
    <sphere Name="Gray Sphere" Center="-2, 0.2, -0.5" Radius="0.2">
      <defaultShader Diffuse="0.5, 0.5, 0.5" Specular="1, 1, 1" ReflectionLevel="0.6" SpecularPower="1"/>
    </sphere>
    <ground Name="Plane" Normal="0, 1, 0" Offset="">
      <checkerBoard WhiteDiffuse="1, 1, 1" BlackDiffuse="0.1, 0.1, 0.1" WhiteReflectionLevel="0.1" BlackReflectionLevel="0.5"/>
    </ground>
  </objects>
  <lights>
    <light Position="-2, 2.5, -1" Color="1, 1, 1"/>
    <light Position="1.5, 2.5, 1.5" Color="0, 0, 1"/>
  </lights>
  <camera Position="0, 2, -6" Target="-0.5, 0.5, 0" />
</scene>




The file structure is the following:

  • A [scene] tag is used as the root tag and allows you to define the following parameters:
  • FogStart / FogEnd : Define the range of the fog from the camera.
  • FogColor : RGB color of the fog
  • ClearColor : Background RGB color
  • AmbientColor : Ambient RGB color

  • An [objects] tag which contains the objects list

  • A [lights] tag which contains the lights list
  • A [camera] tag which defines the scene camera. It is our point of view, defined by the following parameters:
  • Position : Camera position (X, Y, Z)
  • Target : Camera target (X, Y, Z)

All objects are defined by a name and can be one of the following types:

  • sphere : Sphere defined by its center and radius
  • ground : Plane representing the ground, defined by its offset from 0 and the direction of its normal
  • mesh : Complex object defined by a list of vertices and faces. It can be manipulated with 3 vectors: Position, Rotation and Scaling:





<mesh Name="Box" Position="-3, 0, 2" Rotation="0, 0.7, 0">
  <vertices count="24">-1, -1, -1, -1, 0, 0,-1, -1, 1, -1, 0, 0,-1, 1, 1, -1, 0, 0,-1, 1, -1, -1, 0, 0,-1, 1, -1, 0, 1, 0,-1, 1, 1, 0, 1, 0,1, 1, 1, 0, 1, 0,1, 1, -1, 0, 1, 0,1, 1, -1, 1, 0, 0,1, 1, 1, 1, 0, 0,1, -1, 1, 1, 0, 0,1, -1, -1, 1, 0, 0,-1, -1, 1, 0, -1, 0,-1, -1, -1, 0, -1, 0,1, -1, -1, 0, -1, 0,1, -1, 1, 0, -1, 0,-1, -1, 1, 0, 0, 1,1, -1, 1, 0, 0, 1,1, 1, 1, 0, 0, 1,-1, 1, 1, 0, 0, 1,-1, -1, -1, 0, 0, -1,-1, 1, -1, 0, 0, -1,1, 1, -1, 0, 0, -1,1, -1, -1, 0, 0, -1,</vertices>
  <indices count="36">0,1,2,2,3,0,4,5,6,6,7,4,8,9,10,10,11,8,12,13,14,14,15,12,16,17,18,18,19,16,20,21,22,22,23,20,</indices>
</mesh>




Faces are indexes into the vertices. A face contains 3 vertices and each vertex is defined by two vectors: position (X, Y, Z) and normal (Nx, Ny, Nz).
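To make that flat layout concrete, here is a small illustrative sketch (Python, not part of the project) that decodes such lists: every group of 6 floats is one vertex (position then normal), and every group of 3 indices is one face.

```python
def decode_mesh(vertices, indices):
    """Split a flat vertex list (6 floats per vertex: x, y, z, nx, ny, nz)
    and a flat index list (3 indices per face) into structured triangles."""
    assert len(vertices) % 6 == 0 and len(indices) % 3 == 0
    verts = [
        {"position": tuple(vertices[i:i + 3]), "normal": tuple(vertices[i + 3:i + 6])}
        for i in range(0, len(vertices), 6)
    ]
    faces = [tuple(indices[i:i + 3]) for i in range(0, len(indices), 3)]
    return verts, faces
```

Applied to the box above, the 144 floats decode to 24 vertices (4 per cube face, so each face can have its own normal) and the 36 indices to 12 triangles.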

Objects can have a child node used to define the applied material:

  • defaultShader : Default material defined by:
  • Diffuse : Base RGB color
  • Ambient : Ambient RGB color
  • Specular : Specular RGB color
  • Emissive : Emissive RGB color
  • SpecularPower : Sharpness of the specular
  • RefractionIndex : Refraction index (you must also define OpacityLevel to use it)
  • OpacityLevel : Opacity level (you must also define RefractionIndex to use it)
  • ReflectionLevel : Reflection level (0 = no reflection)

  • checkerBoard : material defining a checkerboard with the following properties:

  • WhiteDiffuse : "White" square diffuse color
  • WhiteAmbient : "White" square ambient color
  • WhiteReflectionLevel : "White" square reflection level
  • BlackDiffuse : "Black" square diffuse color
  • BlackAmbient : "Black" square ambient color
  • BlackReflectionLevel : "Black" square reflection level

Lights are defined via the [light] tag, which can accept Position and Color attributes. Lights are omnidirectional.

Finally, if we use this scene file:






<?xml version="1.0" encoding="utf-8" ?>
<scene FogStart="5" FogEnd="20" FogColor="0, 0, 0" ClearColor="0, 0, 0" AmbientColor="1, 1, 1">
  <objects>
    <ground Name="Plane" Normal="0, 1, 0" Offset="">
      <defaultShader Diffuse="0.4, 0.4, 0.4" Specular="1, 1, 1" ReflectionLevel="0.3" Ambient="0.5, 0.5, 0.5"/>
    </ground>
    <sphere Name="Sphere" Center="-0.5, 1.5, 0" Radius="1">
      <defaultShader Diffuse="0, 0, 1" Specular="1, 1, 1" ReflectionLevel="" Ambient="1, 1, 1"/>
    </sphere>
  </objects>
  <lights>
    <light Position="-0.5, 2.5, -2" Color="1, 1, 1"/>
  </lights>
  <camera Position="0, 2, -6" Target="-0.5, 0.5, 0" />
</scene>




We will obtain the following picture:

53ac53ad-b971-4d5e-8526-e7a4e39c3bb1

Web server and worker roles

The web server runs under ASP.NET and provides two functionalities:

  • Connection to the worker roles using the queue in order to launch a rendering:





void Render(string scene)
{
    try
    {
        InitializeStorage();
        var guid = Guid.NewGuid();

        CloudBlob blob = Container.GetBlobReference(guid + ".xml");
        blob.UploadText(scene);

        blob = Container.GetBlobReference(guid + ".progress");
        blob.UploadText("-1");

        var message = new CloudQueueMessage(guid.ToString());
        queue.AddMessage(message);

        guidField.Value = guid.ToString();
    }
    catch (Exception ex)
    {
        System.Diagnostics.Trace.WriteLine(ex.ToString());
    }
}




As you can see, the web server generates a GUID for each request to identify the rendering job. Then the description of the scene (the XML file) is copied to a blob (with the GUID as name) in order to allow the worker roles to access it. Finally, a message is sent to the queue and a blob is created to give feedback on the request progress.

  • Publishing a web service to expose request progress:





[OperationContract]
[WebGet]
public string GetProgress(string guid)
{
    try
    {
        CloudBlob blob = _Default.Container.GetBlobReference(guid + ".progress");
        string result = blob.DownloadText();

        if (result == "101")
            blob.Delete();

        return result;
    }
    catch (Exception ex)
    {
        return ex.Message;
    }
}




The web service gets the content of the blob and returns the result. If the request is queued, the value will be -1, and if the request is finished, the value will be 101 (in which case the blob will be deleted).
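The progress convention (-1 = queued, 0 to 100 = rendering percentage, 101 = finished, anything else = an exception message) is the whole contract between the service and the client. Here is an illustrative helper (Python, not part of the project) that makes the convention explicit:

```python
def interpret_progress(value):
    """Map the raw progress blob content to a client-side status.
    Convention: -1 = queued, 0..100 = rendering percentage, 101 = finished."""
    try:
        progress = int(value)
    except ValueError:
        return ("error", value)  # the service returned an exception message
    if progress == -1:
        return ("queued", None)
    if progress == 101:
        return ("finished", None)
    if 0 <= progress <= 100:
        return ("rendering", progress)
    return ("error", value)
```

The JavaScript client shown later implements exactly these branches.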

The worker roles read the content of the queue, and when a message is available, a worker grabs it and handles it:






while (true)
{
    CloudQueueMessage msg = null;
    semaphore.WaitOne();
    try
    {
        msg = queue.GetMessage();
        if (msg != null)
        {
            queue.DeleteMessage(msg);
            string guid = msg.AsString;
            CloudBlob blob = container.GetBlobReference(guid + ".xml");
            string xml = blob.DownloadText();

            CloudBlob blobProgress = container.GetBlobReference(guid + ".progress");
            blobProgress.UploadText("0");

            WorkingUnit unit = new WorkingUnit();

            unit.OnFinished += () =>
            {
                blob.Delete();
                unit.Dispose();
                semaphore.Release();
            };

            unit.Launch(guid, xml, container);
        }
        else
        {
            semaphore.Release();
        }
        Thread.Sleep(1000);
    }
    catch (Exception ex)
    {
        semaphore.Release();
        if (msg != null)
        {
            CloudQueueMessage newMessage = new CloudQueueMessage(msg.AsString);
            queue.AddMessage(newMessage);
        }
        Trace.WriteLine(ex.ToString());
    }
}




Once the scene is loaded, the worker updates the progress state (using the associated blob) and creates a WorkingUnit, which is in charge of producing the picture asynchronously. It raises an OnFinished event when the render is done, in order to clean up and dispose all associated resources.

We can also see here the usage of the semaphore to limit the number of concurrent renders.

The WorkingUnit is mainly defined like this:






public void Launch(string guid, string xml, CloudBlobContainer container)
{
    try
    {
        XmlDocument xmlDocument = new XmlDocument();
        xmlDocument.LoadXml(xml);
        XmlNode sceneNode = xmlDocument.SelectSingleNode("/scene");

        Scene scene = new Scene();
        scene.Load(sceneNode);

        ParallelRayTracer renderer = new ParallelRayTracer();

        resultBitmap = new Bitmap(RenderWidth, RenderHeight, PixelFormat.Format32bppRgb);

        bitmapData = resultBitmap.LockBits(new Rectangle(0, 0, RenderWidth, RenderHeight), ImageLockMode.WriteOnly, PixelFormat.Format32bppRgb);
        int bytes = Math.Abs(bitmapData.Stride) * bitmapData.Height;
        byte[] rgbValues = new byte[bytes];
        IntPtr ptr = bitmapData.Scan0;

        renderer.OnAfterRender += (obj, evt) =>
        {
            System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, bytes);

            resultBitmap.UnlockBits(bitmapData);
            using (MemoryStream ms = new MemoryStream())
            {
                resultBitmap.Save(ms, ImageFormat.Png);
                ms.Position = 0;
                CloudBlob finalBlob = container.GetBlobReference(guid + ".png");
                finalBlob.UploadFromStream(ms);
                CloudBlob blob = container.GetBlobReference(guid + ".progress");
                blob.UploadText("101");
            }
            OnFinished();
        };

        int previousPercentage = -10;
        renderer.OnLineRendered += (obj, evt) =>
        {
            if (evt.Percentage - previousPercentage < 10)
                return;
            previousPercentage = evt.Percentage;
            CloudBlob blob = container.GetBlobReference(guid + ".progress");
            blob.UploadText(evt.Percentage.ToString());
        };

        renderer.Render(scene, RenderWidth, RenderHeight, (x, y, color) =>
        {
            var offset = x * 4 + y * bitmapData.Stride;
            rgbValues[offset] = (byte)(color.B * 255);
            rgbValues[offset + 1] = (byte)(color.G * 255);
            rgbValues[offset + 2] = (byte)(color.R * 255);
        });
    }
    catch (Exception ex)
    {
        CloudBlob blob = container.GetBlobReference(guid + ".progress");
        blob.DeleteIfExists();
        blob = container.GetBlobReference(guid + ".png");
        blob.DeleteIfExists();
        Trace.WriteLine(ex.ToString());
    }
}




The WorkingUnit works according to the following algorithm:

  • Loading the scene
  • Creating the raytracer
  • Generating the picture and accessing the bytes array
  • When the picture is rendered, saving it in a blob and updating the job progress state
  • Launching the render

The raytracer

The raytracer is entirely written in C# 4.0 and uses the TPL (Task Parallel Library) to enable parallel code execution.

The following functionalities are supported (but as Yoda said, "Obvious is the code", so do not hesitate to browse the code):

  • Fog
  • Diffuse
  • Ambient
  • Transparency
  • Reflection
  • Refraction
  • Shadows
  • Complex objects
  • Unlimited light sources
  • Antialiasing
  • Parallel rendering
  • Octrees

The interesting point with a raytracer is that it is a massively parallelizable process. Indeed, a raytracer executes strictly the same code for each pixel of the screen.

So the key point of the raytracer is:






Parallel.For(0, RenderHeight, y => ProcessLine(scene, y));




So for each line, we execute the following method in parallel on all CPU cores of the computer:






void ProcessLine(Scene scene, int line)
{
    for (int x = 0; x < RenderWidth; x++)
    {
        if (!renderInProgress)
            return;
        RGBColor color = RGBColor.Black;

        if (SuperSamplingLevel == 0)
        {
            color = TraceRay(new Ray { Start = scene.Camera.Position, Direction = GetPoint(x, line, scene.Camera) }, scene, 0);
        }
        else
        {
            int count = 0;
            double size = 0.4 / SuperSamplingLevel;

            for (int sampleX = -SuperSamplingLevel; sampleX <= SuperSamplingLevel; sampleX += 2)
            {
                for (int sampleY = -SuperSamplingLevel; sampleY <= SuperSamplingLevel; sampleY += 2)
                {
                    color += TraceRay(new Ray { Start = scene.Camera.Position, Direction = GetPoint(x + sampleX * size, line + sampleY * size, scene.Camera) }, scene, 0);
                    count++;
                }
            }

            if (SuperSamplingLevel == 1)
            {
                color += TraceRay(new Ray { Start = scene.Camera.Position, Direction = GetPoint(x, line, scene.Camera) }, scene, 0);
                count++;
            }

            color = color / count;
        }

        color.Clamp();

        storePixel(x, line, color);
    }

    // Report progress
    lock (this)
    {
        linesProcessed++;
        if (OnLineRendered != null)
            OnLineRendered(this, new LineRenderedEventArgs { Percentage = (linesProcessed * 100) / RenderHeight, LineRendered = line });
    }
}




The main part is the TraceRay method, which casts a ray for each pixel of a line:






private RGBColor TraceRay(Ray ray, Scene scene, int depth, SceneObject excluded = null)
{
    List<Intersection> intersections;

    if (excluded == null)
        intersections = IntersectionsOrdered(ray, scene).ToList();
    else
        intersections = IntersectionsOrdered(ray, scene).Where(intersection => intersection.Object != excluded).ToList();

    return intersections.Count == 0 ? scene.ClearColor : ComputeShading(intersections, scene, depth);
}




If the ray intersects no object, the color of the background is returned (ClearColor). Otherwise, we have to evaluate the color of the intersected object:






private RGBColor ComputeShading(List<Intersection> intersections, Scene scene, int depth)
{
    Intersection intersection = intersections[0];
    intersections.RemoveAt(0);

    var direction = intersection.Ray.Direction;
    var position = intersection.Position;
    var normal = intersection.Normal;
    var reflectionDirection = direction - 2 * Vector3.Dot(normal, direction) * normal;

    RGBColor result = GetBaseColor(intersection.Object, position, normal, reflectionDirection, scene, depth);

    // Opacity
    if (IsOpacityEnabled && intersections.Count > 0)
    {
        double opacity = intersection.Object.Shader.GetOpacityLevelAt(position);
        double refractionIndex = intersection.Object.Shader.GetRefractionIndexAt(position);

        if (opacity < 1.0)
        {
            if (refractionIndex == 1 || !IsRefractionEnabled)
                result = result * opacity + ComputeShading(intersections, scene, depth) * (1.0 - opacity);
            else
            {
                // Refraction
                result = result * opacity + GetRefractionColor(position, Utilities.Refract(direction, normal, refractionIndex), scene, depth, intersection.Object) * (1.0 - opacity);
            }
        }
    }

    if (!IsFogEnabled)
        return result;

    // Fog
    double distance = (scene.Camera.Position - position).Length;

    if (distance < scene.FogStart)
        return result;

    if (distance > scene.FogEnd)
        return scene.FogColor;

    double fogLevel = (distance - scene.FogStart) / (scene.FogEnd - scene.FogStart);

    return result * (1.0 - fogLevel) + scene.FogColor * fogLevel;
}




The ComputeShading method computes the base color of the object (taking into account all light sources). If the object is transparent or uses refraction or reflection, a new ray must be cast to compute the induced color.

At the end, the fog is added and the final color is returned.

As you can see, calculating each pixel is really resource consuming, so having huge raw power can drastically improve the rendering speed.
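The two small formulas used above (the reflection direction d - 2(n·d)n and the linear fog blend) translate directly to any language. Here is an illustrative Python version, with vectors and colors as plain 3-tuples:

```python
def reflect(direction, normal):
    """Reflection direction: d - 2 * dot(n, d) * n (vectors as 3-tuples)."""
    dot = sum(n * d for n, d in zip(normal, direction))
    return tuple(d - 2 * dot * n for d, n in zip(direction, normal))

def apply_fog(color, fog_color, distance, fog_start, fog_end):
    """Linear fog: color untouched before fog_start, fully fogged after fog_end,
    linearly interpolated in between."""
    if distance < fog_start:
        return color
    if distance > fog_end:
        return fog_color
    level = (distance - fog_start) / (fog_end - fog_start)
    return tuple(c * (1.0 - level) + f * level for c, f in zip(color, fog_color))
```

For example, a ray pointing straight down at an upward-facing plane reflects straight up, and a point halfway between FogStart and FogEnd gets a 50/50 blend with the fog color.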

The client

The front-end client is written in HTML with a small amount of JavaScript to make it a bit more dynamic:






var checkState = function () {
    $.getJSON("RenderStatusService.svc/GetProgress", { guid: guid, noCache: Math.random() }, function (result) {
        var percentage = result.d;
        var percentageAsNumber = parseInt(percentage);

        if (percentage == "-1") {
            $("#progressMessage").text("Request queued");
            setTimeout(checkState, 1000);
            return;
        }

        if (isNaN(percentageAsNumber)) {
            window.localStorage.removeItem("currentGuid");
            restartUI();
            return;
        }

        if (percentageAsNumber != 101) {
            $("#progressBar").progressbar({ value: percentageAsNumber });
            $("#progressMessage").text("Rendering in progress…" + result.d + "%");
            setTimeout(checkState, 1000);
        }
        else {
            $("#renderInProgressDiv").slideUp("fast");
            $("#final").slideDown("fast");
            $("#imageLoadingMessage").slideDown("fast");
            $.getJSON("RenderStatusService.svc/GetImageUrl", { guid: guid, noCache: Math.random() }, function (url) {
                finalImage.src = url.d;
                document.getElementById("imageHref").href = url.d;
            });
            window.localStorage.removeItem("currentGuid");
        }
    });
};




If the web service returns -1, the request is queued. If the returned value is between 0 and 100, we can update the progress bar, and if the value is 101, we can get and display the rendered picture.

Conclusion

As we can see, Azure gives us all the required tools to develop and debug for the cloud.

I sincerely invite you to install the SDK and develop your own raytracer!

To get further

Some useful links:

  • https://blogs.msdn.com/b/windowsazure/
  • https://www.windowsazure.com/en-us/develop/downloads/

Silverlight five is out!

homeSlide5

It's a really great pleasure for me to announce that Silverlight 5 is finally available:

https://www.microsoft.com/silverlight/

The Silverlight 5 Toolkit was also updated to support the RTM: https://silverlight.codeplex.com/releases/view/78435

And don't forget to have a look at my blog about all the new features of the toolkit: https://blogs.msdn.com/b/eternalcoding/archive/2011/12/10/silverlight-toolkit-september-2011-for-silverlight-5-what-s-new.aspx

And of course, Babylon was updated for the RTM too: https://code.msdn.microsoft.com/Babylon-3D-engine-f0404ace

For all the downloads and the feature list, please go to: https://www.silverlight.net/learn/overview/what's-new-in-silverlight-5

Security and 3D

First of all, please read this article: https://blogs.msdn.com/b/eternalcoding/archive/2011/10/18/some-reasons-why-my-3d-is-not-working-with-silverlight-5.aspx

By the way, you may experience security errors with Silverlight 5 RTM when you want to use the wonderful new 3D feature. In fact, some graphics drivers may allow malicious code to execute, which may lead to an unwanted hard reset or a blue screen.

Starting with the beta version, to protect users from this kind of problem, we initiated a first scenario where all Windows XP Display Driver Model (XPDM) drivers on Windows XP, Windows Vista, and Windows 7 are blocked by default. Permission is granted automatically in elevated trust scenarios, and Windows Display Driver Model (WDDM) drivers do not require user consent at run-time.

But as always, features, including security features, continue to be refined and added during post-beta development.

And for the RTM version, a number of approaches were considered to further improve security and stability, but the solution to block 3D in partial trust by default was the best choice for this release. Permission is still granted automatically in elevated trust scenarios.

To grant 3D permissions, you simply have to right-click on your Silverlight plugin, go to the Permissions tab and allow your application:

You can of course help your users notice and understand this by using the following code, in order to tailor a good user experience:






if (GraphicsDeviceManager.Current.RenderMode != RenderMode.Hardware)
{
    switch (GraphicsDeviceManager.Current.RenderModeReason)
    {
        case RenderModeReason.GPUAccelerationDisabled:
            throw new Exception(Strings.NoGPUAcceleration);
        case RenderModeReason.SecurityBlocked:
            throw new Exception(Strings.HardwareAccelerationBlockedBySecurityReason);
        case RenderModeReason.Not3DCapable:
            throw new Exception(Strings.HardwareAccelerationNotAvailable);
        case RenderModeReason.TemporarilyUnavailable:
            throw new Exception(Strings.HardwareAccelerationNotAvailable);
    }
}




It is really important to explain to your users why 3D is deactivated. As there is a potential security hole, it is their responsibility to allow the 3D experience.

Support and lifecycle

The support status for Silverlight is now updated for SL5:

https://support.microsoft.com/gp/lifean45#sl5

Here is the extract for Silverlight 5:

"Silverlight 5 – Microsoft will provide assisted and unassisted no-charge support for customers using versions of Silverlight 5. Paid support options are available to customers requiring support with issues beyond install and upgrade issues. Microsoft will continue to ship updates to the Silverlight 5 runtime or Silverlight 5 SDK, including updates for security vulnerabilities as determined by the MSRC. Developers using the Silverlight 5 development tools and developing applications for Silverlight 5 can use paid assisted-support options to receive development support.

Silverlight five volition support the browser versions listed on this folio through 10/12/2021 , or though the support lifecycle of the underlying browsers, whichever is shorter. As browsers evolve, the support page volition be updated to reflect levels of compatibility with newer browser versions."

Silverlight Toolkit (December 2011) for Silverlight 5–What's new?

The new version of the Silverlight Toolkit (December 2011) for Silverlight 5 is out and you can grab it here:

https://silverlight.codeplex.com/releases/view/78435

Update: The Babylon engine now uses the Silverlight 5 Toolkit: https://code.msdn.microsoft.com/Babylon-3D-engine-f0404ace

I had the pleasure of working on this version and I'm pleased to write this article to help you discover how the Toolkit enhances Silverlight 5 with the following features:

  1. Seamless integration of 3D models and other assets with the Content Pipeline
  2. New Visual Studio templates for creating:
    1. Silverlight 3D Application
    2. Silverlight 3D Library
    3. Silverlight Effect
  3. New samples to demo these features

Seamless integration with the Content Pipeline

The toolkit comes with a new assembly: Microsoft.Xna.Framework.Content.dll. This assembly allows you to load assets from the .xnb file format (produced by the Content Pipeline).

Using the new Visual Studio templates (which I will describe later), you can now easily port existing 3D projects directly to Silverlight 5!

The Microsoft.Xna.Framework.Content.dll assembly will add the following classes to Silverlight 5:

  • ContentManager
  • Model
  • SpriteFont and SpriteBatch

The toolkit also comes with the Microsoft.Xna.Framework.Toolkit.dll assembly, which will add the following classes to Silverlight 5:

  • SilverlightEffect
  • Mouse, MouseState
  • Keyboard, KeyboardState

ContentManager

The documentation for this class can be found here:
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.content.contentmanager.aspx

The ContentManager class is the representative of the Content Pipeline within your code. It is responsible for loading objects from .xnb files.

To create a ContentManager you just have to call the following code:






    ContentManager contentManager = new ContentManager(null, "Content");




There are restrictions for this class: the ContentManager for Silverlight can only support one Content project and the RootDirectory must be set to "Content".

Using it is really simple because it provides a simple Load method which can be used to create your objects:






    // Load fonts
    hudFont = contentManager.Load<SpriteFont>("Fonts/Hud");

    // Load overlay textures
    winOverlay = contentManager.Load<Texture2D>("Overlays/you_win");

    // Music
    backgroundMusic = contentManager.Load<SoundEffect>("Sounds/Music");




Model

The documentation for this class can be found here:
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.model.aspx

The Model class has the same API as in XNA 4 and it will allow you to load and render 3D models from .xnb files:






    // Draw the model.
    Model tankModel = content.Load<Model>("tank");
    tankModel.Draw();




You can also use bones if your model supports them:






    Model tankModel = content.Load<Model>("tank");
    tankModel.Root.Transform = world;
    tankModel.CopyAbsoluteBoneTransformsTo(boneTransforms);

    // Draw the model.
    foreach (ModelMesh mesh in tankModel.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.World = boneTransforms[mesh.ParentBone.Index];
            effect.View = view;
            effect.Projection = projection;

            effect.EnableDefaultLighting();
        }

        mesh.Draw();
    }




You can import models using the .x or .fbx formats:

And thanks to the FBX importer, you can also import .3ds, .obj, .dxf and even Collada files.

SpriteFont & SpriteBatch

The documentation for these classes can be found here:
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.spritebatch.aspx
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.spritefont.aspx

The SpriteBatch class is used to display 2D textures on top of the render. You can use them for displaying a UI or sprites.






    SpriteBatch spriteBatch = new SpriteBatch(graphicsDevice);

    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);

    spriteBatch.Draw(texture, new Rectangle(0, 0, width, height), Color.White);

    spriteBatch.End();




As you can see, SpriteBatch only needs a texture to display.

SpriteFont allows you to use sprites to display text.






    SpriteFont hudFont = contentManager.Load<SpriteFont>("Fonts/Hud");

    spriteBatch.DrawString(hudFont, value, position + new Vector2(1.0f, 1.0f), Color.Black);
    spriteBatch.DrawString(hudFont, value, position, color);




SpriteFont relies on SpriteBatch to draw and needs a font definition from the ContentManager:

SilverlightEffect

The toolkit introduces a new class called SilverlightEffect which can be used to use .fx files.

It also supports .slfx, which is the default extension. There is no difference between .slfx and .fx, but as the XNA Effect Processor is already associated with .fx, the Silverlight Content Pipeline had to select another one.

You can now define a complete effect within a Content project and use it for rendering your models.

To do so:

  • Create a .fx file with at least one technique
  • Shader entry points must be parameterless
  • Define render states

For example here is a simple .fx file:






    float4x4 WorldViewProjection;
    float4x4 World;
    float3 LightPosition;

    // Structs
    struct VS_INPUT
    {
        float4 position : POSITION;
        float3 normal : NORMAL;
        float4 color : COLOR0;
    };

    struct VS_OUTPUT
    {
        float4 position : POSITION;
        float3 normalWorld : TEXCOORD0;
        float3 positionWorld : TEXCOORD1;
        float4 color : COLOR0;
    };

    // Vertex Shader
    VS_OUTPUT mainVS(VS_INPUT In)
    {
        VS_OUTPUT Out = (VS_OUTPUT)0;

        // Compute projected position
        Out.position = mul(In.position, WorldViewProjection);

        // Compute world normal
        Out.normalWorld = mul(In.normal, (float3x3)World);

        // Compute world position
        Out.positionWorld = (mul(In.position, World)).xyz;

        // Transmit vertex color
        Out.color = In.color;

        return Out;
    }

    // Pixel Shader
    float4 mainPS(VS_OUTPUT In) : COLOR
    {
        // Light equation
        float3 lightDirectionW = normalize(LightPosition - In.positionWorld);
        float ndl = max(0, dot(In.normalWorld, lightDirectionW));

        // Final color
        return float4(In.color.rgb * ndl, 1);
    }

    // Technique
    technique MainTechnique
    {
        pass P0
        {
            VertexShader = compile vs_2_0 mainVS(); // Must be a parameterless entry point
            PixelShader = compile ps_2_0 mainPS(); // Must be a parameterless entry point
        }
    }




The Toolkit will add the required processors to the Content Pipeline in order to create the .xnb file for this effect:

To use this effect, you just have to instantiate a new SilverlightEffect inside your code:






    mySilverlightEffect = scene.ContentManager.Load<SilverlightEffect>("CustomEffect");




Then, you can retrieve the effect's parameters:






    worldViewProjectionParameter = mySilverlightEffect.Parameters["WorldViewProjection"];
    worldParameter = mySilverlightEffect.Parameters["World"];
    lightPositionParameter = mySilverlightEffect.Parameters["LightPosition"];




To render an object with your effect, it is the same code as in XNA 4:






    worldParameter.SetValue(Matrix.CreateTranslation(1, 1, 1));
    worldViewProjectionParameter.SetValue(WorldViewProjection);
    lightPositionParameter.SetValue(LightPosition);

    foreach (var pass in mySilverlightEffect.CurrentTechnique.Passes)
    {
        // Apply pass
        pass.Apply();

        // Set vertex buffer and index buffer
        graphicsDevice.SetVertexBuffer(vertexBuffer);
        graphicsDevice.Indices = indexBuffer;

        // The shaders are already set so we can draw primitives
        graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, VerticesCount, 0, FaceCount);
    }




Texture2D, TextureCube & SoundEffect

Silverlight 5 provides the Texture2D, TextureCube and SoundEffect classes. With the Toolkit, you will be able to load them from the ContentManager:






    // Load overlay textures
    winOverlay = contentManager.Load<Texture2D>("Overlays/you_win");

    // Music
    backgroundMusic = contentManager.Load<SoundEffect>("Sounds/Music");




Mouse and Keyboard

In order to facilitate porting existing 3D applications and to accommodate polling input application models, we also added partial support for the Microsoft.Xna.Framework.Input namespace.

So you will be able to request the MouseState and KeyboardState everywhere you want:






    public MainPage()
    {
        InitializeComponent();

        Mouse.RootControl = this;
        Keyboard.RootControl = this;
    }




However, there is a slight difference from the original XNA on other endpoints: you have to register the root control which will provide the events for Mouse and Keyboard. The MouseState positions will be relative to the upper left corner of this control:






    private void myDrawingSurface_Draw(object sender, DrawEventArgs e)
    {
        // Render scene
        scene.Draw();

        // Let's go for another turn!
        e.InvalidateSurface();

        // Get mouse and keyboard state
        MouseState mouseState = Mouse.GetState();
        KeyboardState keyboardState = Keyboard.GetState();
    }




The MouseState and KeyboardState are similar to the XNA versions:

  • https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.input.mousestate.aspx
  • https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.input.keyboardstate.aspx
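As a hedged sketch of this polling model (it assumes the RootControl registration and Draw handler shown above; the reactions in the comments are illustrative only):

```csharp
// Poll the current input state (positions are relative to RootControl)
MouseState mouseState = Mouse.GetState();
KeyboardState keyboardState = Keyboard.GetState();

// React to the left mouse button being held down
if (mouseState.LeftButton == ButtonState.Pressed)
{
    // mouseState.X / mouseState.Y give the pointer position
}

// React to a key being held down
if (keyboardState.IsKeyDown(Keys.Space))
{
    // e.g. make the player jump
}
```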

Extensibility

The Silverlight Content Pipeline can be extended the same way as the XNA Content Pipeline on other endpoints. You can provide your own implementation for loading assets from elsewhere than the embedded .xnb files.

For example you can write a class that will stream .xnb files from the network. To do so, you have to inherit from ContentManager and provide your own implementation of OpenStream:






    public class MyContentManager : ContentManager
    {
        public MyContentManager() : base(null)
        {
        }

        protected override System.IO.Stream OpenStream(string assetName)
        {
            return base.OpenStream(assetName);
        }
    }
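To actually stream from the network, one illustrative (untested) approach is to pre-download the .xnb bytes, for example with WebClient.OpenReadAsync since Silverlight networking is asynchronous, and serve them from a cache. The class and cache layout below are assumptions, not part of the toolkit:

```csharp
// Sketch: serve pre-downloaded .xnb bytes instead of reading from the .xap
public class NetworkContentManager : ContentManager
{
    private readonly System.Collections.Generic.Dictionary<string, byte[]> cache;

    public NetworkContentManager(System.Collections.Generic.Dictionary<string, byte[]> downloadedAssets)
        : base(null)
    {
        cache = downloadedAssets;
    }

    protected override System.IO.Stream OpenStream(string assetName)
    {
        // The asset bytes must already have been downloaded into the cache
        return new System.IO.MemoryStream(cache[assetName]);
    }
}
```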




You can also provide your own type reader. Here is for example the custom type reader for SilverlightEffect:






    /// <summary>
    /// Read SilverlightEffect.
    /// </summary>
    public class SilverlightEffectReader : ContentTypeReader<SilverlightEffect>
    {
        /// <summary>
        /// Read and create a SilverlightEffect
        /// </summary>
        protected override SilverlightEffect Read(ContentReader input, SilverlightEffect existingInstance)
        {
            int techniquesCount = input.ReadInt32();
            EffectTechnique[] techniques = new EffectTechnique[techniquesCount];

            for (int techniqueIndex = 0; techniqueIndex < techniquesCount; techniqueIndex++)
            {
                int passesCount = input.ReadInt32();
                EffectPass[] passes = new EffectPass[passesCount];

                for (int passIndex = 0; passIndex < passesCount; passIndex++)
                {
                    string passName = input.ReadString();

                    // Vertex shader
                    int vertexShaderByteCodeLength = input.ReadInt32();
                    byte[] vertexShaderByteCode = input.ReadBytes(vertexShaderByteCodeLength);
                    int vertexShaderParametersLength = input.ReadInt32();
                    byte[] vertexShaderParameters = input.ReadBytes(vertexShaderParametersLength);

                    // Pixel shader
                    int pixelShaderByteCodeLength = input.ReadInt32();
                    byte[] pixelShaderByteCode = input.ReadBytes(pixelShaderByteCodeLength);
                    int pixelShaderParametersLength = input.ReadInt32();
                    byte[] pixelShaderParameters = input.ReadBytes(pixelShaderParametersLength);

                    MemoryStream vertexShaderCodeStream = new MemoryStream(vertexShaderByteCode);
                    MemoryStream pixelShaderCodeStream = new MemoryStream(pixelShaderByteCode);
                    MemoryStream vertexShaderParametersStream = new MemoryStream(vertexShaderParameters);
                    MemoryStream pixelShaderParametersStream = new MemoryStream(pixelShaderParameters);

                    // Instantiate pass
                    SilverlightEffectPass currentPass = new SilverlightEffectPass(passName, GraphicsDeviceManager.Current.GraphicsDevice, vertexShaderCodeStream, pixelShaderCodeStream, vertexShaderParametersStream, pixelShaderParametersStream);
                    passes[passIndex] = currentPass;

                    vertexShaderCodeStream.Dispose();
                    pixelShaderCodeStream.Dispose();
                    vertexShaderParametersStream.Dispose();
                    pixelShaderParametersStream.Dispose();

                    // Render states
                    int renderStatesCount = input.ReadInt32();

                    for (int renderStateIndex = 0; renderStateIndex < renderStatesCount; renderStateIndex++)
                    {
                        currentPass.AppendState(input.ReadString(), input.ReadString());
                    }
                }

                // Instantiate technique
                techniques[techniqueIndex] = new EffectTechnique(passes);
            }

            return new SilverlightEffect(techniques);
        }
    }




New Visual Studio templates

The toolkit will install two new project templates and a new item template:

Silverlight3DApp

This template will produce a full working Silverlight 3D application.

The new solution will be composed of 4 projects:

  • Silverlight3DApp : The main project
  • Silverlight3DAppContent : The content project attached to the main project
  • Silverlight3DWeb : The web site that will display the main project
  • Silverlight3DWebContent : A content project attached to the website if you want to stream your .xnb files from the website instead of using embedded ones. This will let you distribute a smaller .xap.

The main project (Silverlight3DApp) is built around two objects:

  • A Scene object, which:
    • Creates the ContentManager
    • Handles the DrawingSurface Draw event
  • A Cube object, which:
    • Creates a vertex buffer and index buffer
    • Uses the ContentManager to retrieve a SilverlightEffect (CustomEffect.slfx) from the content project
    • Configures and uses the SilverlightEffect to render

Silverlight3DLib

This template will produce a Silverlight Library without any content but with all the Microsoft.Xna.Framework references set:

And the resulting project will look like:

SilverlightEffect

This item template can be used within a Content project to add a custom .slfx file that will work with the SilverlightEffect class:

The file content will be the following:






    float4x4 World;
    float4x4 View;
    float4x4 Projection;

    // TODO: add effect parameters here.

    struct VertexShaderInput
    {
        float4 Position : POSITION0;

        // TODO: add input channels such as texture
        // coordinates and vertex colors here.
    };

    struct VertexShaderOutput
    {
        float4 Position : POSITION0;

        // TODO: add vertex shader outputs such as colors and texture
        // coordinates here. These values will automatically be interpolated
        // over the triangle, and provided as input to your pixel shader.
    };

    VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
    {
        VertexShaderOutput output;

        float4 worldPosition = mul(input.Position, World);
        float4 viewPosition = mul(worldPosition, View);
        output.Position = mul(viewPosition, Projection);

        // TODO: add your vertex shader code here.

        return output;
    }

    float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
    {
        // TODO: add your pixel shader code here.

        return float4(1, 0, 0, 1);
    }

    technique Technique1
    {
        pass Pass1
        {
            // TODO: set renderstates here.

            VertexShader = compile vs_2_0 VertexShaderFunction();
            PixelShader = compile ps_2_0 PixelShaderFunction();
        }
    }




New samples to demo these features

Finally, to help you discover and learn all these features, we added some cool samples:

Bloom

This sample shows you how to use sprites to achieve post-processing effects such as "bloom". It also uses the Content Pipeline to import a tank model from a .fbx file.

CustomModelEffect

This sample shows you how custom effects can be applied to a model using the Content Pipeline.

Generated geometry

This sample shows how 3D models can be generated by code during the Content Pipeline build process.

Particles

This sample introduces the concept of a particle system, and shows how to draw particle effects using SpriteBatch. Two particle effects are demonstrated: an explosion and a rising plume of smoke:

Primitives3D

This sample provides easily reusable code for drawing basic geometric primitives:

Platformer

This sample is a complete game with 3 levels provided (you can easily add yours). It shows the usage of SpriteBatch, SpriteFont and SoundEffect within a platform game. It also uses the Keyboard class to control the player.

SimpleAnimation

This sample shows how to apply program-controlled rigid body animation to a 3D model loaded with the ContentManager:

Skinning

This sample shows how to process and render a skinned character model using the Content Pipeline.

Conclusion

As you noticed, all these new additions to the Silverlight Toolkit are made to make it easy to get started with the new Silverlight 3D features by providing developer tools that improve usability and productivity.

You can now easily start a new project that leverages both the concepts of XNA and Silverlight. It becomes easy to work with 3D concepts and resources like shaders, models, sprites, effects, etc.

We also tried to reduce the effort needed to port existing 3D applications to Silverlight.

So now it's up to you to discover the wonderful world of 3D using Silverlight 5!

Silverlight 5 Toolkit Compile error -2147024770 (0, 0): error : Unknown compile error (check flags against DX version)

(Woow what a funny title!)

Some Silverlight 5 Toolkit users send me mails about this error message:

Error 1 Compile error -2147024770
(0, 0): error : Unknown compile error (check flags against DX version) (myfile.slfx)

To correct the problem, you just have to install the latest DirectX Runtime:

https://www.microsoft.com/download/en/details.aspx?id=8109

This error is generated by the Silverlight effect file compiler, which wants to use the DirectX Effect compiler and doesn't manage to locate it.

The DirectX Effect compiler is located in a library called d3dx9_xx.dll. The "_xx_" part may vary according to the version of the DirectX SDK used (in the case of the Silverlight 5 Toolkit, we used the June 2010 version, which relies on d3dx9_43.dll).

The problem is that this library is not installed by default on Windows (but a lot of applications install it). That's why you may be required to install the DirectX Runtime, which comes with all versions of the library.

Kinect for Windows beta 2 is out

The new site for Kinect for Windows and the new beta of the SDK are out!

Kinect

This new version focuses on stability and performance:

  • Faster and improved skeletal tracking
  • Status change support
  • Improved joint tracking
  • 64-bit support
  • Audio can be used from the UI thread

We also announce the release date of the commercial version for early 2012.

Faster and improved skeletal tracking

With updates to the multi-core model, Kinect for Windows is now 20% faster than it was in the last release (beta 1 refresh). Also, the accuracy rate of skeletal tracking and joint recognition has been substantially improved.

When using two Kinects, you can now specify which one is used for skeletal tracking.

Status change support

You lot can at present plug and unplug your Kinect without losing work. API support for detecting and managing device status changes, such as device unplugged, device plugged in, power unplugged, etc. Apps can reconnect to the Kinect device after it is plugged in, after the computer returns from suspend, etc..

Improved joint tracking

The accuracy rate of joint recognition and tracking has been substantially improved.

64-bit support

The SDK can be used to build 64-bit applications. Previously, only 32-bit applications could be built.

Audio can be used from the UI thread

Developers using the audio within WPF no longer need to access the DMO from a separate thread. You can create the KinectAudioSource on the UI thread and simplify your code.

Additional information

Furthermore, this new version now supports Windows "8" (Desktop side).

The new site and the SDK can be found here:

https://www.kinectforwindows.org

To use this new version you only need to recompile your code, as no breaking changes were introduced.

The Kinect Toolbox was obviously updated to support the new SDK:

https://kinecttoolbox.codeplex.com/

The NuGet package can be found here:

https://nuget.org/List/Packages/KinectToolbox

Some reasons why my 3D is not working with Silverlight 5

The aim of this post is to give you some tricks to enable 3D experiences with Silverlight 5 in your applications.

But first of all, let's see how you can activate accelerated 3D support inside a Silverlight 5 project.

Standard way

To activate accelerated 3D support, the host of Silverlight must activate it using a param named "enableGPUAcceleration":
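The hosting markup itself did not survive in this capture; as a sketch, the plugin declaration on the hosting page would look something like this (the .xap name is illustrative):

```html
<object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
  <param name="source" value="ClientBin/My3DApp.xap"/>
  <!-- Required to enable GPU acceleration and accelerated 3D -->
  <param name="enableGPUAcceleration" value="true"/>
</object>
```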

By doing this, you will allow Silverlight to use the graphics card's power to render XAML elements. And at the same time, you will activate the accelerated 3D support.

Troubleshooting

You can detect whether 3D is activated or not in your code through the GraphicsDeviceManager class:






    // Check if GPU is on
    if (GraphicsDeviceManager.Current.RenderMode != RenderMode.Hardware)
    {
        MessageBox.Show("Please activate enableGPUAcceleration=true on your Silverlight plugin page.", "Warning", MessageBoxButton.OK);
    }




The RenderMode property will be set to Unavailable when 3D is not activated.

In this case, the property GraphicsDeviceManager.Current.RenderModeReason will be set to one of these values:

  • Not3DCapable
  • GPUAccelerationDisabled
  • SecurityBlocked
  • TemporarilyUnavailable

Not3DCapable

You will get this reason when your graphics card is too old to support required 3D features such as shader model 2.0.

GPUAccelerationDisabled

You forgot to set enableGPUAcceleration to true in the hosting HTML page.

SecurityBlocked

When you launch your application, a specific domain entry for 3D can be set in the Permissions tab of the Silverlight Configuration panel.

You will not see this entry unless your domain group policy sets it (and in this case you have to ask your administrator for permission) or you run your Silverlight application under Windows XP.

When you run a Silverlight application under Windows XP, the following behavior happens:

  • 3D is enabled automatically in elevated trust (out of browser)
  • The first time a user runs a non-elevated 3D application, a domain entry set to Deny is added to the Permissions tab of the Silverlight Configuration panel

To use a 3D application under Windows XP in a non-elevated context, you must change the pre-created Deny entry to Allow.

TemporarilyUnavailable

This happens when the device is lost (for example under the lock screen on Windows XP; it doesn't happen much in WDDM), where Silverlight expects the rendering surface to return at some point.

Additional tips

One other important point to know is that 3D won't work correctly in windowless mode. In this case, the draw event will be driven from the UI thread and so will be fired only during UI events such as page scroll, for example.

Conclusion

I hope this post was useful to help you use the wonderful accelerated 3D experience of Silverlight 5.

As you can see, the best solution to handle 3D support is to use GraphicsDeviceManager.Current.RenderModeReason.

  • Silverlight 5 downloads
  • Silverlight 5 Toolkit
  • Silverlight 5 RC–What's new in the 3D world?
  • Silverlight Toolkit (September 2011) for Silverlight 5–What's new?
  • New version of Babylon engine for Silverlight 5 and Silverlight 5 Toolkit

New version of Babylon engine for Silverlight 5 and Silverlight 5 Toolkit

I've just updated the source of the Babylon engine. You can grab the bits here:

https://code.msdn.microsoft.com/silverlight/Babylon-3D-engine-f0404ace

This new version uses the new content pipeline of the toolkit and is compiled using Silverlight 5 RC.

You can play with the exposed shaders using the new SilverlightEffect grade.

It is now time to unleash the power of accelerated 3D!!
