Lighten Darken Skin tone in C# and Silverlight for #windowsphone #wpdev

I started Cool Camera as a stop-gap application while trying to figure my way around a SatNav app (which started while I was answering some posts on the AppHub forums). The current available version stands at 1.6, and 1.7 is with Microsoft.

It has come a long way since 1.0, which only supported taking pictures and a camera-style HUD. The picture viewer was very basic: it just displayed the image in an Image control. Since then I have added support for filters, added video recording and playback, and added an album viewer. I have also worked a bit more on image processing and have got a bit better at it.

The first set of filters was added to the app thanks to René Schulte. I remember coming across his code a while back; it provided a very handy way of creating and applying effects to WriteableBitmaps. I used the few supplied filters to get started; however, before long I was asked if I could provide a way of making images darker. The most common scenario is when you use flash and the images come out too white, especially faces. As I started, I remembered René's face detection post, Silverlight 4 Real-Time Face Detection.

I started with René's YCbCr code for the first pass: detecting whether a color falls into the skin tone range. The first pass for skin tone detection worked just fine, so the search began for a way to increase or decrease the luminance of the matched pixels. I came across HSLColor, which had a lighten / darken method, but that didn't work, so eventually I used Lerp:

public int[] Process(int[] inputPixels, int width, int height)
{
    var resultPixels = new int[inputPixels.Length];

    // Threshold every pixel
    for (int i = 0; i < inputPixels.Length; i++)
    {
        int c = inputPixels[i];

        var ycbcr = YCbCrColor.FromArgbColori(c);
        if (ycbcr.Y >= LowerThreshold.Y && ycbcr.Y <= UpperThreshold.Y
         && ycbcr.Cb >= LowerThreshold.Cb && ycbcr.Cb <= UpperThreshold.Cb
         && ycbcr.Cr >= LowerThreshold.Cr && ycbcr.Cr <= UpperThreshold.Cr)
        {
            // Skin tone match: lerp the pixel towards the target color
            System.Windows.Media.Color sc = System.Windows.Media.Color.FromArgb((byte)(c >> 24), (byte)(c >> 16), (byte)(c >> 8), (byte)c);

            Microsoft.Xna.Framework.Color xc = new Microsoft.Xna.Framework.Color(sc.R, sc.G, sc.B, sc.A);
            xc = Color.Lerp(xc, Color, Amount);

            // XNA color channels are already bytes, so no clamping is needed
            c = (255 << 24) | (xc.R << 16) | (xc.G << 8) | xc.B;
        }
        resultPixels[i] = c;
    }

    return resultPixels;
}

Now all you need to do is pass the amount and the color to Lerp. To darken, you pass Black; to lighten, you pass White.
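For anyone curious what Lerp actually does: it is just a per-channel linear interpolation. A minimal standalone sketch of the idea (plain C#, no XNA dependency; the names are mine, not the framework's):

```csharp
using System;

public static class LerpSketch
{
    // Linear interpolation of a single channel:
    // t = 0 keeps a, t = 1 gives b, 0.3f moves 30% of the way towards b.
    public static byte Lerp(byte a, byte b, float t)
    {
        return (byte)Math.Round(a + (b - a) * t);
    }

    public static void Main()
    {
        // Darken a skin-ish tone 30% towards black (b = 0 per channel).
        Console.WriteLine(Lerp(220, 0, 0.3f)); // 154
        Console.WriteLine(Lerp(180, 0, 0.3f)); // 126
    }
}
```

Lerping towards White (255 per channel) lightens in exactly the same way.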


Slydr Progress Update

The Slydr rework is close to completion. Slydr now uses the entire trace to determine a match, and there is no speed detection.

You can trace fast or slow and you will get the same result, unlike before. I have also added a new algorithm which generates a probability / similarity score between 0 and 1. The algorithm executes after the Levenshtein distance step and is slower, so I had to make changes to reduce the number of items I match against, and it seems to work fine now. I might continue optimising it to remove unnecessary work later.
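The post doesn't show the scoring step, but a common way of turning a Levenshtein distance into a 0-to-1 similarity is to normalise it by the longer word's length. A sketch of that idea (my code, not Slydr's actual algorithm):

```csharp
using System;

public static class Similarity
{
    // Classic dynamic-programming Levenshtein edit distance.
    public static int Levenshtein(string a, string b)
    {
        var d = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) d[0, j] = j;
        for (int i = 1; i <= a.Length; i++)
        {
            for (int j = 1; j <= b.Length; j++)
            {
                int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
                d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1,    // deletion
                                            d[i, j - 1] + 1),   // insertion
                                   d[i - 1, j - 1] + cost);     // substitution
            }
        }
        return d[a.Length, b.Length];
    }

    // Normalise into 0..1: 1 means identical, 0 means nothing in common.
    public static double Score(string a, string b)
    {
        return 1.0 - (double)Levenshtein(a, b) / Math.Max(a.Length, b.Length);
    }
}
```

`Score("good", "god")` comes out as 0.75, and identical words score 1, which gives a natural cut-off threshold for pruning candidates.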

As a result, I had to redo all the dictionaries to ensure that, in addition to the word, I also had path data for that language / layout. It has taken me a long time, as the code I used to generate the new dictionaries was WPF, but I tried to stay as close to Silverlight as I could.

On the UI side, I have moved the entire code to using the touch-screen TouchReported event, unlike before, where part of the work was done in TouchReported and the rest was done using the high-level MouseEnter event.

Instead of spawning new threads to check for new user-added words, I have moved to using the thread pool.
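In practice that swap is essentially `ThreadPool.QueueUserWorkItem` instead of `new Thread(...).Start()`; a small sketch (the worker body and names are hypothetical):

```csharp
using System;
using System.Threading;

public static class PoolSketch
{
    // Queue the word check on a pool thread; returns true once it has run.
    public static bool RunCheckOnPool()
    {
        using (var done = new ManualResetEvent(false))
        {
            ThreadPool.QueueUserWorkItem(state =>
            {
                // ... scan for new user-added words here ...
                done.Set();
            });
            return done.WaitOne(5000); // wait up to 5s for the pool thread
        }
    }

    public static void Main()
    {
        Console.WriteLine(RunCheckOnPool() ? "done" : "timeout");
    }
}
```

The pool reuses threads between checks, so you avoid the cost of creating and tearing down a thread for every scan.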

I still have to test a few bits, like migrating existing user-added words to generate trace paths, and I am going to add persistence of user usage. In addition, I need to add the ability to reset user-added words and usage from the settings page.

I am sure there will still be complaints, but what can I do?

How to remove diacritics / accent marks in Windows Phone 7.x

The usual way of removing diacritics doesn't work on Windows Phone, so you need to manually replace the characters yourself.

Here's what I tend to do.

Note: it only replaces lowercase characters (as that was all I needed), so keep that in mind… however, nothing is stopping you from extending it.

public class StringUtil
{
   static char[] englishReplace = { 'e' };
   static char[] englishAccents = { 'é' };

   static char[] frenchReplace = { 'a', 'a', 'a', 'a', 'c', 'e', 'e', 'e', 'e', 'i', 'i', 'o', 'o', 'u', 'u', 'u' };
   static char[] frenchAccents = { 'à', 'â', 'ä', 'æ', 'ç', 'é', 'è', 'ê', 'ë', 'î', 'ï', 'ô', 'œ', 'ù', 'û', 'ü' };

   static char[] germanReplace = { 'a', 'o', 'u', 's' };
   static char[] germanAccents = { 'ä', 'ö', 'ü', 'ß' };

   static char[] spanishReplace = { 'a', 'e', 'i', 'o', 'u' };
   static char[] spanishAccents = { 'á', 'é', 'í', 'ó', 'ú' };

   static char[] catalanReplace = { 'a', 'e', 'e', 'i', 'i', 'o', 'o', 'u', 'u' };
   static char[] catalanAccents = { 'à', 'è', 'é', 'í', 'ï', 'ò', 'ó', 'ú', 'ü' };

   static char[] italianReplace = { 'a', 'e', 'e', 'i', 'o', 'o', 'u' };
   static char[] italianAccents = { 'à', 'è', 'é', 'ì', 'ò', 'ó', 'ù' };

   static char[] polishReplace = { 'a', 'c', 'e', 'l', 'n', 'o', 's', 'z', 'z' };
   static char[] polishAccents = { 'ą', 'ć', 'ę', 'ł', 'ń', 'ó', 'ś', 'ż', 'ź' };

   static char[] hungarianReplace = { 'a', 'e', 'i', 'o', 'o', 'o', 'u', 'u', 'u' };
   static char[] hungarianAccents = { 'á', 'é', 'í', 'ö', 'ó', 'ő', 'ü', 'ú', 'ű' };

   static char[] portugueseReplace = { 'a', 'a', 'a', 'a', 'e', 'e', 'i', 'o', 'o', 'o', 'u', 'u' };
   static char[] portugueseAccents = { 'ã', 'á', 'â', 'à', 'é', 'ê', 'í', 'õ', 'ó', 'ô', 'ú', 'ü' };

   static char[] czechReplace = { 'a', 'a', 'a', 'c', 'd', 'e', 'e', 'i', 'n', 'o', 'r', 's', 't', 'u', 'u', 'y', 'z' };
   static char[] czechAccents = { 'ã', 'á', 'á', 'č', 'ď', 'é', 'ě', 'í', 'ň', 'ó', 'ř', 'š', 'ť', 'ú', 'ů', 'ý', 'ž' };

   static char[] dutchReplace = { 'e', 'e', 'i', 'o', 'o', 'u' };
   static char[] dutchAccents = { 'é', 'ë', 'ï', 'ó', 'ö', 'ü' };

   static char[] turkishReplace = { 'c', 'e', 'e', 'g', 'i', 'i', 'o', 'o', 'u' };
   static char[] turkishAccents = { 'ç', 'é', 'ë', 'ğ', 'İ', 'ï', 'ó', 'ö', 'ü' };

   static char[] romanianReplace = { 'a', 'a', 'i', 's', 's', 't', 't' };
   static char[] romanianAccents = { 'ă', 'â', 'î', 'ş', 'ș', 'ţ', 'ț' };

   static char[] filipinoReplace = { 'a', 'a', 'a', 'e', 'e', 'e', 'i', 'i', 'i', 'o', 'o', 'o', 'u', 'u', 'u' };
   static char[] filipinoAccents = { 'á', 'à', 'â', 'é', 'è', 'ê', 'í', 'ì', 'î', 'ó', 'ò', 'ô', 'ú', 'ù', 'û' };

   static char[] ukrainianReplace = { 'i', 'r' };
   static char[] ukrainianAccents = { 'ї', 'ґ' };

   static char[] russianReplace = { 'b' };
   static char[] russianAccents = { 'ъ' };

   static char[] greekReplace = { 'α', 'ε', 'η', 'ι', 'ι', 'ι', 'ο', 'υ', 'υ', 'υ', 'ω' };
   static char[] greekAccents = { 'ά', 'έ', 'ή', 'ί', 'ϊ', 'ΐ', 'ό', 'ύ', 'ϋ', 'ΰ', 'ώ' };

   static char[] arabicAccents = { 'أ', 'إ', 'آ', 'ء', 'پ', 'ض', 'ذ', 'ـ', 'خ', 'خ', 'غ', 'ش', 'ة', 'ث', 'ً', 'ٰ', 'ؤ', 'ظ', 'ى', 'ئ' };
   static char[] arabicReplace = { 'ا', 'ا', 'ا', 'ا', 'ب', 'ص', 'د', 'ّ', 'ح', 'ك', 'ع', 'س', 'ت', 'ت', 'َ', 'َ', 'و', 'ط', 'ي', 'ي' };

   static char[] bulgarianReplace = { 'ь', 'и' };
   static char[] bulgarianAccents = { 'ъ', 'ѝ' };

   static char[] croatianReplace = { 'c', 'c', 'd', 's', 'z' };
   static char[] croatianAccents = { 'č', 'ć', 'đ', 'š', 'ž' };

   static char[] estonianReplace = { 'a', 'o', 'o', 'u' };
   static char[] estonianAccents = { 'ä', 'ö', 'õ', 'ü' };

   static char[] icelandicReplace = { 'o' };
   static char[] icelandicAccents = { 'ö' };

   static char[] latvianReplace = { 'e' };
   static char[] latvianAccents = { 'ē' };

   static char[] slovakianReplace = { 'a', 'a', 'c', 'd', 'e', 'i', 'l', 'l', 'n', 'o', 'o', 'r', 's', 't', 'u', 'y', 'z' };
   static char[] slovakianAccents = { 'á', 'ä', 'č', 'ď', 'é', 'í', 'ĺ', 'ľ', 'ň', 'ó', 'ô', 'ŕ', 'š', 'ť', 'ú', 'ý', 'ž' };

   public enum DictionaryDef
   {
      // reconstructed from the cases used below
      English, French, CanadianFrench, SwissFrench, German, Spanish, Catalan,
      Italian, Polish, Hungarian, Portuguese, BrazilianPortuguese, Czech, CzechAlt,
      Dutch, Turkish, Romanian, Filipino, Ukrainian, Russian, Greek, Arabic,
      Bulgarian, Croatian, Slovenian, Estonian, Icelandic, Latvian, Slovak, SlovakAlt
   }

   static StringBuilder sbStripAccents = new StringBuilder();

   public static string RemoveDiacritics(string accentedStr, DictionaryDef eDictionary)
   {
      char[] replacement = null;
      char[] accents = null;
      switch (eDictionary)
      {
         case DictionaryDef.Arabic:
            replacement = arabicReplace;
            accents = arabicAccents;
            break;

         case DictionaryDef.Slovak:
         case DictionaryDef.SlovakAlt:
            replacement = slovakianReplace;
            accents = slovakianAccents;
            break;

         case DictionaryDef.Latvian:
            replacement = latvianReplace;
            accents = latvianAccents;
            break;

         case DictionaryDef.Icelandic:
            replacement = icelandicReplace;
            accents = icelandicAccents;
            break;

         case DictionaryDef.Estonian:
            replacement = estonianReplace;
            accents = estonianAccents;
            break;

         case DictionaryDef.Bulgarian:
            replacement = bulgarianReplace;
            accents = bulgarianAccents;
            break;

         case DictionaryDef.Romanian:
            replacement = romanianReplace;
            accents = romanianAccents;
            break;

         case DictionaryDef.Croatian:
         case DictionaryDef.Slovenian:
            replacement = croatianReplace;
            accents = croatianAccents;
            break;

         case DictionaryDef.English:
            replacement = englishReplace;
            accents = englishAccents;
            break;

         case DictionaryDef.French:
         case DictionaryDef.CanadianFrench:
         case DictionaryDef.SwissFrench:
            replacement = frenchReplace;
            accents = frenchAccents;
            break;

         case DictionaryDef.German:
            replacement = germanReplace;
            accents = germanAccents;
            break;

         case DictionaryDef.Spanish:
            replacement = spanishReplace;
            accents = spanishAccents;
            break;

         case DictionaryDef.Catalan:
            replacement = catalanReplace;
            accents = catalanAccents;
            break;

         case DictionaryDef.Italian:
            replacement = italianReplace;
            accents = italianAccents;
            break;

         case DictionaryDef.Polish:
            replacement = polishReplace;
            accents = polishAccents;
            break;

         case DictionaryDef.Hungarian:
            replacement = hungarianReplace;
            accents = hungarianAccents;
            break;

         case DictionaryDef.Portuguese:
         case DictionaryDef.BrazilianPortuguese:
            replacement = portugueseReplace;
            accents = portugueseAccents;
            break;

         case DictionaryDef.Czech:
         case DictionaryDef.CzechAlt:
            replacement = czechReplace;
            accents = czechAccents;
            break;

         case DictionaryDef.Dutch:
            replacement = dutchReplace;
            accents = dutchAccents;
            break;

         case DictionaryDef.Turkish:
            replacement = turkishReplace;
            accents = turkishAccents;
            break;

         case DictionaryDef.Russian:
            replacement = russianReplace;
            accents = russianAccents;
            break;

         case DictionaryDef.Ukrainian:
            replacement = ukrainianReplace;
            accents = ukrainianAccents;
            break;

         case DictionaryDef.Greek:
            replacement = greekReplace;
            accents = greekAccents;
            break;

         default:
            return accentedStr;
      }

      if (accents != null && replacement != null && accentedStr.IndexOfAny(accents) > -1)
      {
         // Load the source string into the builder before running the replacements
         sbStripAccents.Length = 0;
         sbStripAccents.Append(accentedStr);
         for (int i = 0; i < accents.Length; i++)
            sbStripAccents.Replace(accents[i], replacement[i]);

         return sbStripAccents.ToString();
      }
      return accentedStr;
   }
}
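To get a feel for the approach without the full class, here is a cut-down standalone version with just a handful of French mappings (the names and the shortened table are mine):

```csharp
using System;
using System.Text;

public static class StripSketch
{
    static readonly char[] accents = { 'à', 'â', 'ç', 'é', 'è', 'ê', 'ë' };
    static readonly char[] replace = { 'a', 'a', 'c', 'e', 'e', 'e', 'e' };

    public static string Strip(string s)
    {
        // Fast path: no accented characters, return the original string
        if (s.IndexOfAny(accents) < 0) return s;

        var sb = new StringBuilder(s);
        for (int i = 0; i < accents.Length; i++)
            sb.Replace(accents[i], replace[i]);
        return sb.ToString();
    }

    public static void Main()
    {
        Console.WriteLine(Strip("éléphant à l'école")); // elephant a l'ecole
    }
}
```

The `IndexOfAny` check matters: most strings contain no accents, so they skip the StringBuilder work entirely.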

Alarm Clock 1.6 submitted, Slydr 2.4 now available for download

Slydr 2.4 passed certification the day before yesterday and has been available for download since yesterday. Mandarin / Cantonese are now supported in Slydr.

I finally finished the changes for Alarm Clock 1.6: it adds 10 new fonts, making a total of 24, and 3 new tones (a total of 9 now). Users can now test alarm tones while in trial mode 🙂

I have also started the changes to Slydr for 2.5. So far, I have modified the trie node to contain a word or a list of words.

Say you have god and good. In the trie, you can enter god and it should display both god and good (with good before god :)). However, the logic in the trie meant that only the last entry was valid, so I have modified the code to allow for one or more entries per node.

I have also used different logic for predictive typing. When the user types the first character, the trie iterates through many words, which is time-consuming (it gets faster the more characters the user types). I have modified the predictive code so that it doesn't use the trie when the user enters the first character, and it is so much faster now.
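One way to sketch that first-character shortcut is a precomputed per-letter bucket, so the very first keystroke is a single dictionary lookup rather than a trie walk (hypothetical code, not Slydr's):

```csharp
using System;
using System.Collections.Generic;

public static class FirstCharIndex
{
    // Build a bucket of words per starting letter, once, at load time.
    public static Dictionary<char, List<string>> Build(IEnumerable<string> words)
    {
        var index = new Dictionary<char, List<string>>();
        foreach (var w in words)
        {
            List<string> bucket;
            if (!index.TryGetValue(w[0], out bucket))
            {
                bucket = new List<string>();
                index[w[0]] = bucket;
            }
            bucket.Add(w);
        }
        return index;
    }

    public static void Main()
    {
        var index = Build(new[] { "god", "good", "hat", "his" });
        // First keystroke 'g' is now a single lookup, not a trie traversal.
        Console.WriteLine(string.Join(",", index['g'])); // god,good
    }
}
```

Once the user types a second character, falling back to the trie becomes cheap because the candidate set is already much smaller.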

Slydr 2.4 submitted / 2.3 available in Marketplace

Slydr 2.3 with the predictive algorithm has made it through testing and is now available.
I thought it would be automatically published, but for some reason I forgot to select that option, so this morning I published it manually.

Slydr 2.4 is ready and has been submitted. It contains dictionaries for both simplified and traditional Chinese. It also contains a few bug fixes, a few enhancements (toggle key background for shift / spacebar, etc.) and a much asked-for cursor / caret for the textbox.

There, I finally got around to adding it.

By the way, the Chinese support uses pronunciation dictionaries. The user can enter / swipe the complete pronunciation or just the first character of each syllable. The examples below should make it clearer.
如果 ru guo or rg
是的 shi de or sd
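A sketch of how such an entry could be indexed under both forms (my code; the real dictionary format is not shown in the post):

```csharp
using System;
using System.Linq;

public static class PinyinKeys
{
    // Return both lookup keys for one entry: full pinyin and initials.
    public static string[] Keys(string pinyin)
    {
        string[] syllables = pinyin.Split(' ');
        string full = string.Concat(syllables);                              // "ruguo"
        string initials = new string(syllables.Select(s => s[0]).ToArray()); // "rg"
        return new[] { full, initials };
    }

    public static void Main()
    {
        Console.WriteLine(string.Join(" ", Keys("ru guo"))); // ruguo rg
    }
}
```

Both keys point at the same hanzi entry, so a swipe over either "ruguo" or just "rg" resolves to 如果.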

Now I have to start thinking about what to do for the next release…

Slydr 2.3 submitted for testing

I have finally submitted 2.3 for testing. It includes a significant change in functionality: predictive typing.

Because the requirements for predictive typing are different from those for sliding, I had to rework my word dictionaries slightly; hopefully loading doesn't take significantly longer. I have used a trie implementation, as I mentioned previously, instead of a TST.

Apart from that, I have reduced the creation of “use and throw” threads 🙂 I already had a background thread for dictionary processing; now I use extra events to cancel the existing operation rather than trying to kill the thread (which of course never worked in WP7).
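The cancel-via-event pattern looks roughly like this: the worker polls an event between chunks and bails out cleanly, instead of the caller trying to abort the thread (a sketch with my own names):

```csharp
using System;
using System.Threading;

public static class CancelSketch
{
    // Process up to `chunks` units of work, checking the cancel event
    // between chunks; returns how many chunks actually ran.
    public static int ProcessDictionary(ManualResetEvent cancel, int chunks)
    {
        int done = 0;
        for (int i = 0; i < chunks; i++)
        {
            if (cancel.WaitOne(0)) break;   // signalled: stop cooperatively
            // ... process one chunk of the dictionary here ...
            done++;
        }
        return done;
    }

    public static void Main()
    {
        var cancel = new ManualResetEvent(false);
        cancel.Set();   // signal before starting: no chunks should run
        Console.WriteLine(ProcessDictionary(cancel, 1000)); // 0
    }
}
```

`WaitOne(0)` is a non-blocking check, so the worker pays almost nothing per chunk while remaining responsive to cancellation.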

Finally, I updated the character popup width to match the key width in portrait and landscape modes.

I am working on the Mandarin / Cantonese dictionaries. They require a bit more time, as the dictionaries are slightly unusual 🙂


Word Prediction Implemented using Trie

I know I talked about using a Ternary Search Tree (TST) earlier, and a TST is probably more efficient, but the most comprehensive TST implementation I could find could only return matches of a specific length:

if you entered hi, it would return hi
if you entered hi*, you would get his, him, etc.
if you entered hi**, you would get high, hind, etc.

The TST, however, only had a simple Contains for a preliminary match.

This meant I would need to run it multiple times to get the overall matches and then sort them by relevance. Too many calls eventually means slower performance, plus I would have to keep constant track of the maximum word length and keep padding wildcards.

So I have settled on a trie, which allows individual character entries, provides an option to see whether the entry is a full match (similar to Contains), and gives you all the matches in a single list. Sorting takes care of relevance and it works perfectly.
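A minimal trie along those lines (my sketch, not Slydr's code): nodes can hold several words, one walk down the prefix finds the subtree, and a single traversal collects every match into one list:

```csharp
using System;
using System.Collections.Generic;

public class TrieNode
{
    readonly Dictionary<char, TrieNode> children = new Dictionary<char, TrieNode>();
    readonly List<string> words = new List<string>();   // a node may hold several words

    public void Insert(string word)
    {
        var node = this;
        foreach (char c in word)
        {
            TrieNode child;
            if (!node.children.TryGetValue(c, out child))
            {
                child = new TrieNode();
                node.children[c] = child;
            }
            node = child;
        }
        node.words.Add(word);
    }

    // All words under `prefix`, collected into one list in a single pass.
    public List<string> Matches(string prefix)
    {
        var node = this;
        foreach (char c in prefix)
            if (!node.children.TryGetValue(c, out node))
                return new List<string>();   // prefix not present at all

        var results = new List<string>();
        Collect(node, results);
        results.Sort();   // real code would sort by usage / relevance instead
        return results;
    }

    static void Collect(TrieNode node, List<string> results)
    {
        results.AddRange(node.words);
        foreach (var child in node.children.Values)
            Collect(child, results);
    }
}

public static class TrieDemo
{
    public static void Main()
    {
        var trie = new TrieNode();
        foreach (var w in new[] { "hi", "his", "him", "high" })
            trie.Insert(w);
        Console.WriteLine(string.Join(",", trie.Matches("hi"))); // hi,high,him,his
    }
}
```

Compare with the TST wildcard approach: one call to `Matches("hi")` replaces a whole series of `hi*`, `hi**`, … queries, and no padding or maximum-length bookkeeping is needed.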

Thank goodness… One user complained about the match list being a bit iffy, so I am going to give each item a larger area, which will allow a bigger touch target!!!